Beyond Phase Gates – Agile R&D


Phased development with gate reviews has delivered many benefits to product development organizations. It ensures that the business perspective is incorporated into product development. Gate reviews are generally led by senior managers, and their engagement drives organizational consensus and resource commitment. Gate reviews also guide the organization away from risky endeavors by focusing on predictability and return on investment.

However, the stage-gate process also introduces some challenges. Because of its additional overhead and the senior management time it demands, it is really only practical for large new product development efforts. The gate review process is hard to implement on small sustaining engineering projects, where the goal is to fix issues or improve manufacturability. It is equally hard to apply to small technology development, advanced R&D or disruptive innovation projects. In many organizations this means that small projects do not follow any consistent process, and managers lose visibility and control over the entire portfolio.

The stage-gate process can be seen as a waterfall model: it assumes customer needs and functional requirements are completely known before development starts. For products with long development cycles, or in rapidly changing markets, stage gates introduce many challenges:

  • Difficulty adapting to changing market needs
  • Limited business/marketing visibility into development
  • Late or uncoordinated requirements changes
  • Surprise issues and unexpected risks
  • Disconnected test plans and expensive testing

More importantly, this model discourages experimentation and prevents disruptive innovations from getting to market. As Clayton Christensen pointed out:

“The Stage-Gate system assumes that the proposed strategy is the right strategy; the problem is that except in the case of incremental innovations, the right strategy cannot be completely known in advance. The Stage-Gate system is not suited to the task of assessing innovations whose purpose is to build new growth businesses, but most companies continue to follow it simply because they see no alternative.”

So what is the solution? Implement Agile Development.
Read More


Do Processes and Metrics Kill Innovation?


Many of the organizations we have visited debate innovation vs. structure. The concern is that if we enforce processes and metrics on innovation, we will cripple it. The post Does Structure Kill Creativity? – K.L.Wightman lays out some interesting thoughts:

There are 26 letters in the alphabet and 12 notes in a musical scale, yet there are infinite ways to create a story and a song. Writing is like a science experiment: structure is the control, creativity is the variable.

Read More


Toyota aims to spice up cars with new development methods

We have discussed Toyota’s R&D management processes extensively (here and here). You might also remember Toyota’s recalls and problems a couple of years ago.  Toyota’s president, Mr. Toyoda, had announced that he would beef up quality control processes to address these problems, and Toyota did announce an additional cycle of quality control (see Devil’s Advocate Policy).  We had argued that the root cause of Toyota’s problems was increased system complexity, and that more quality control would not address the underlying problems. This analysis was later validated by others. Now Toyota is talking about changing its R&D processes (see Toyota aims to spice up cars with new development methods).

image from engadget

Details are scarce, but the three key points made in the article, if implemented correctly, will definitely benefit Toyota.  The first is to reduce the bureaucratic overhead on the development process.  It is remarkable that 80 to 100 executives were previously included in the approval loop.  It appears that Toyota will eliminate some of the reviews:

“The company will also give greater authority to chief engineers and slash the number of executives involved in the design review process — about 80 to 100 previously — to eliminate layers of decision-making.”

This is a step in the right direction. As we had discussed earlier, reviews and post-design quality control are rarely effective because most design decisions have already been made by then.  The additional effort needs to go into driving risk management decisions into upfront planning. Toyota seems to be addressing that concern as well:

Greater cooperation between the planning and design divisions will allow more design freedom…

Finally, the company is going to shift its focus from near-term sales volume and growth to a longer-term customer and product focus.  We had also pointed this out as a key problem.

“The feeling at the time was, ‘If we build it, they will come,'” Toyoda told reporters at the automaker’s headquarters in central Japan today. “Instead of developing what customers would want next, we were making cars that would rake in sales.”

I love Toyota products and wish them luck.


When to rely on gut feelings

We have discussed papers and empirical data showing that reliance on gut feelings often produces sub-optimal results. Now we have a great explanation of why we should be careful about depending on intuition, from the behavioral economist Dan Ariely (in the McKinsey Quarterly interview Dan Ariely on irrationality in the workplace):

One way to think about it is the following: imagine you stand on a field and you have a soccer ball and you kick it. You close your eyes and you kick it and then you open your eyes and you try to predict, where did the ball fall? Imagine you do this a thousand times; after a while you know exactly the relationship between your kick and where the ball is. Those are the conditions in which intuitions are correct—when we have plenty of experience and we have unambiguous feedback.

That’s learning, right? And we’re very good at it. But imagine something else happened. Imagine you close your eyes, you kick the ball, and then somebody picked it up and moved it 50 feet to the right or to the left or any kind of other random component. Then ask yourself, how good will you be in predicting where it would land? And the answer is: terrible.

The moment I add a random component, performance goes away very quickly. And the world in which executives live in is a world with lots of random elements. Now I don’t mean random that somebody really moves the ball, but you have a random component here, which you don’t control—it’s controlled by your competitors, the weather; there’s lots of things that are outside of your consideration. And it turns out, in those worlds, people are really bad.
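Ariely's soccer-ball analogy is easy to demonstrate with a toy simulation (my own illustration, not from the interview; all numbers are made up). A simple learner estimates the relationship between kick strength and distance from feedback. With clean feedback the prediction error collapses toward zero; once someone randomly moves the ball, it never does:

```python
import random

random.seed(0)

def simulate(kicks, noise=0.0):
    """Kick toward a target, observe where the ball lands, and keep
    refining an estimate of the kick-to-distance relationship.
    Returns the average prediction error over the last 100 kicks."""
    true_gain = 2.0   # metres travelled per unit of kick strength (hypothetical)
    estimate = 1.0    # initial guess of the gain
    errors = []
    for _ in range(kicks):
        strength = random.uniform(1, 10)
        landed = true_gain * strength + random.gauss(0, noise)
        predicted = estimate * strength
        errors.append(abs(predicted - landed))
        # simple learning: nudge the estimate toward the observed gain
        estimate += 0.1 * (landed / strength - estimate)
    return sum(errors[-100:]) / 100

clean = simulate(1000, noise=0.0)    # unambiguous feedback
noisy = simulate(1000, noise=15.0)   # "somebody picked it up and moved it"
print(clean, noisy)  # clean error shrinks toward zero; noisy error stays large
```

The random component puts a floor under the prediction error that no amount of practice removes, which is exactly the world executives operate in.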

So what is the solution?  We should experiment more and test our gut feelings before we go all out and implement a pervasive solution.

This actually, I think, brings us to the most important underutilized tools for management, which [are] experiments. You say, I can use my intuition, I can use data that tells me something about what might happen, but not for sure, or I can implement something and do an experiment. I am baffled by why companies don’t do more experiments.

I think the reason many R&D executives I know do not experiment more is a lack of information – both about the factors driving a decision and about its potential impacts. For example, executives are normally forced to rely on gut feelings when deciding future R&D investments.  It is difficult to experiment because R&D projects are interlinked: it is hard to see the impact of changing one program on all the other linked programs.  Funding decisions also need to satisfy a multitude of often conflicting requirements.  There are no tools to quickly understand the impact of investments on staffing or on competitive position.  Even when information is available, it is normally at the wrong level of detail to actually make a difference.  We need tools that help executives experiment effectively in R&D management.


Behind the scenes at Toyota’s R&D center – Part I

As promised earlier in our case study on portfolio management, here are some insights into R&D management at Toyota.  As we had discussed in the past, Toyota has suffered quite a few setbacks this year, and a lot of these problems stem from increased complexity.  Toyota has been working hard to reverse some of the bad publicity it has received, and recently invited some journalists to see what changes it is making to address the quality problems and maybe drive up sales.  Autoblog was one of them and has two articles detailing the visit (Deep-Dive: Behind the scenes at Toyota’s R&D center, Part 1 — Autoblog; Deep-Dive: Behind the scenes at Toyota’s R&D center, Part Two — Autoblog).

In an effort to show transparency and a concerted effort to improve its quality and safety, for the first time in its history, Toyota has invited a small group of journalists and analysts into its research and safety facilities in Toyota City, Japan. As part of that select group of media, in the coming days, we’ll have a chance to peek behind the curtain, look at how its products are developed and tested and talk to Toyota executives, including CEO Akio Toyoda as we try to fully understand not only how things went so horribly wrong, but how the automaker plans to get back on track.

Let’s dig into the articles and see what we can learn about R&D management processes at Toyota, and about R&D management in general.
Overall, Toyota says it is going to increase quality control checks and train its engineers more.

For the most part, Toyota will continue creating cars and technology in the same manner it has in the past. However, the two major areas that will change include an expansion of the testing use cases beyond current methodologies and improvements in the training and development of its staff.

First of all, the chart below gives an idea of the magnitude of the R&D management challenge. To get a complex system like a cutting-edge car to market and compete successfully, Toyota has to fulfill multiple roles: understand customer preferences, design cars that functionally and visually satisfy customer needs, work with parts suppliers, develop cutting-edge new technology, mature that technology, integrate in-house technology with supplier parts, and test the resulting systems.  Add to this a diversity of locations, cultures and associated politics (Japan vs. US) and you see that this is not an easy management task by far.

Toyota’s R&D Organization (via Autoblog)

The overall process architecture is also somewhat intuitive – a chief engineer is assigned for each product milestone, and he becomes the communication bridge between marketing, research, advanced development (components and subsystems) and the product development team.  It might just be a PowerPoint artifact, but notice the somewhat late involvement of production engineering – it appears they are not using concurrent engineering processes, or co-design for that matter.

All of this is overseen by a chief engineer for every project or vehicle. The job of the chief engineer is to oversee everything related to a project, taking inputs from product management, marketing and advanced engineering, then sending it on to the functional groups in their organization. Every automaker has their own version of this chief engineer, with a variety of titles. At General Motors, this would be the vehicle line executive, at Ford, it’s the chief nameplate engineer and the title at Honda is large project leader. Whatever the title, the end result is that this individual has ultimate responsibility for the end product.

The product development process proceeds in stages of increasing maturity.  Each stage includes requirements analysis, hardware design and software design, followed by a testing and evaluation process.

To a large degree, much of Toyota’s product development process isn’t really any different from what we have seen at other automakers. At its most basic level, it consists of three central phases, starting with requirements analysis. At the beginning of a project, whether it’s a new car or a just a new technology, the engineers determine what the product ultimately needs to do and how it should perform. Based on those requirements, a set of detailed specifications are produced. 

Product Development Process at Toyota (Via Autoblog)
It is interesting that all the testing happens at the end of each phase, as opposed to some form of concurrent engineering (maybe the PowerPoint is just for illustration).
Tomorrow, we will look at the changes Toyota is proposing to the R&D management process and discuss if they will make a difference.

Success of change (improvement) programs

The Financial Times has another interesting take on the success of process improvement projects in Management – Failing to cope with change?

At the meeting, survey data were presented which suggested that, while 37 per cent of UK board members believed that their change programmes were generally successful, only 5 per cent of middle managers did. 

As we discussed in the recent post on key success factors for lasting process improvement results, managers have an inordinate amount of responsibility and power to drive success.

A confident leadership team may know that the right choices have been made. But it may take longer for this to become apparent to the rest of the organisation. Of course, there are two other possible explanations for this gap in perception: wishful thinking in the boardroom or plain bad communication. 

The keys to success (real, not imaginary) remain the same: long-term focus, metrics, rewards and raises tied to metrics, and manager involvement.  I guess the point the article makes may be important too – a consistent, simple message from the executives (and board) to the teams:

“Send a small number of simple messages again and again,” he advised. “And the larger the organisation, the simpler the message has to be.”


Where Process-Improvement Projects Go Wrong

Here is a very interesting article from the MIT Sloan Management Review on Where Process-Improvement Projects Go Wrong. I have read survey results on cost cutting showing that more than 90% of the organizations surveyed failed to maintain savings for more than 3 years.  This article mentions another interesting result: more than 60% of organizations adopting Six Sigma are dissatisfied with the results.

The underlying thesis is that even though a process improves under management attention during an improvement project, it reverts back to its original state unless the organization puts concrete practices in place to make sure the changes stick.  Normally, the pre-improvement practices exist because of culture and team member tendencies, which are difficult to change or maintain.  The article points out four lessons:

First, the extended involvement of a Six Sigma or other improvement expert is required if teams are to remain motivated, continue learning and maintain gains.

Second, performance appraisals need to be tied to successful implementation of improvement projects. Studies point out that raises, even in small amounts, can motivate team members to embrace new, better work practices. 

Third, improvement teams should have no more than six to nine members, and the timeline for launching a project should be no longer than six to eight weeks. The bigger the team, the greater the chance members will have competing interests and the harder it will be for them to agree on goals, especially after the improvement expert has moved on to a new project.

Fourth, executives need to directly participate in improvement projects, not just “support” them. Because it was in his best interests, the director in charge of the improvement projects at the aerospace company created the illusion that everything was great by communicating only about projects that were yielding excellent results. By observing the successes and failures of improvement programs firsthand, rather than relying on someone else’s interpretation, executives can make more accurate assessments as to which ones are worth continuing.

I think the fourth point is probably the most important one.  I have seen many Six Sigma projects fail because there was no real incentive to change, no pressure from the executive in charge to drive performance, and no clear way to measure performance to start with…


Pitfalls of Project Portfolio Management implementation

Computer World has an interesting IT Project Portfolio Management (PPM) article that is equally applicable to R&D management – just read R&D when they write IT :-).  They point out three dangerous myths of portfolio management:
  1. PPM is IT’s lookout
  2. Right tools drive PPM success
  3. The best starting place is PPM Best Practice
In my experience, even organizations that do PPM on R&D portfolios often fall into some of these traps (I am not certain what fraction of companies have formalized PPM – maybe we will do a poll on this soon).  PPM is sometimes delegated to the CTO or Engineering.  This has a negative impact on R&D team members because there is no clear customer for what they are developing.
Another key problem is too much focus on tools and best practices.  I myself fell into this trap in one PPM implementation.  I was unaware of how little the organization knew about its project portfolio (most were legacy R&D projects that had gone on for many years).  It was not wise to even attempt to implement PPM when there was no clear portfolio to begin with!

Do CEOs really know what they want from R&D?

Results of another industry survey – this one from Diamond – are actually the opposite of the IBM survey (CEOs want more creativity).  This survey shows that as the economy tumbled, companies became more focused on gaining market share than on exploring white spaces and coming up with innovative products.

Looking for ways to recover from the recession, 57% of the companies surveyed by Diamond Management & Technology Consultants, Inc. plan to pursue a market penetration strategy that risks driving price competition and threatening profitability.

Innovation, often promoted as a panacea for surviving a downturn, is cited as a primary objective of only 16% of the respondents. And despite the economic climate, few companies see cost reduction (15%) and margin improvement (9%) as their primary objectives.

The other interesting finding of this study was that beyond the workforce there was little agreement amongst executives on what made them competitive!

Senior executives were asked what they believe are their companies’ top strengths and weaknesses. Most see their people as their major competitive strength–61% rated it first or second. But beyond that, there was surprisingly little consensus about what capabilities keep their companies competitive. Only 14% cited the “ability to deliver” on corporate programs and other initiatives as a top strength. Furthermore, customer understanding (10%) and market understanding (10%) ranked unexpectedly low as major strengths, given all the money companies invest in customer research and data analytics. 

The press release then goes on to talk about Diamond’s service offerings and their value.  The lesson for me was how little these high-level surveys actually produce.  In my experience, both innovation and incremental development are required: companies need to penetrate markets and enter new white spaces.  The actual task of achieving that requires hard work by R&D managers, and it is very difficult to get to that level of detail in a survey.

Why don’t businesses experiment more?

In a very interesting column in the Harvard Business Review, Dan Ariely writes about why organizations are willing to listen to experts and consultants but not do some experiments themselves and find the best answer:

I think this irrational behavior stems from two sources. One is the nature of experiments themselves. As the people at the consumer goods firm pointed out, experiments require short-term losses for long-term gains. Companies (and people) are notoriously bad at making those trade-offs. Second, there’s the false sense of security that heeding experts provides. When we pay consultants, we get an answer from them and not a list of experiments to conduct. We tend to value answers over questions because answers allow us to take action, while questions mean that we need to keep thinking. Never mind that asking good questions and gathering evidence usually guides us to better answers.

This is a very interesting observation.  I have often wondered why people hire such highly paid consultants.  One point that Dan does not make is the ability to CYA – the consequences of failure are much smaller if someone else (an outside expert) made the decision.  I guess people are recognizing this:

Despite the fact that it goes against how business works, experimentation is making headway at some companies. Scott Cook, the founder of Intuit, tells me he’s trying to create a culture of experimentation in which failing is perfectly fine. Whatever happens, he tells his staff, you’re doing right because you’ve created evidence, which is better than anyone’s intuition. He says the organization is buzzing with experiments.