How to define Innovation?

We have discussed the problem of defining what constitutes innovation versus what is invention or just engineering. The issue is important because funding and managing innovation differs from other R&D. Here is a paper in the journal R&D Management that gives background on what innovation is:

‘Innovation’ was defined by Schumpeter (1934) as the commercialisation of combinations of the following:
(i) new materials and components,
(ii) the introduction of new processes,
(iii) the opening of new markets,
(iv) the introduction of new organisational forms.
According to this definition, innovations are the composite of two worlds – namely, the technical world and the business world. When only a change in technology is involved, Schumpeter terms this invention; when the business world is involved, it becomes an innovation (Janszen, 2000).

Another definition of Innovation is:

In this paper, innovation is defined as ‘the successful exploitation of new ideas incorporating new technologies, design and best practice’ (BIS, 2008).

This is what Peter Drucker had to say about it:

It is the means by which the entrepreneur either creates new wealth-producing resources or endows existing resources with enhanced potential for creating wealth – The Discipline of Innovation (HBR 1985).

Here is another useful distinction – radical or disruptive innovation as opposed to incremental innovation:

Incremental innovation reinforces the capabilities of established organisations, while radical innovation forces them to ask a new set of questions, to draw on new technical and commercial skills and to use new problem-solving approaches (Tushman and Anderson, 1986; Burns and Stalker, 1966). Incremental and radical innovations require different organisational capabilities and may require different management processes.


International Product Development

As we have seen, global product development is here to stay – whether organizations like it or not.  Managing virtual teams is not easy. The article The Practice of Global Product Development from MIT Sloan Management Review has interesting models and checklists for organizations considering international or global product development (GPD).

The first suggestion in the article is to deploy GPD in stages (start with process outsourcing, move to components and then to design).

The article also lays out key success factors for GPD.  I am going to rearrange and rephrase to make them a bit more succinct:

  1. Management Priority: Clearly, global R&D is a big challenge – it requires major organizational and cultural change.  None of it is possible without senior executive priority.
  2. Core Competence (Clear strategy): A clear understanding of what is core to the company and what can be outsourced is also key.  I have seen many organizations that stumbled through outsourcing R&D and lost market share because of duplicated capabilities.
  3. Modularity (Process and Product): To outsource a portion of the work, it needs to be easily separable.  Modular processes and products are clearly required for outsourcing (see the sketch after this list).
  4. Infrastructure (Intellectual Property, Governance, Project Management, Data Quality, Change Management): Infrastructure is needed to manage global product development.  The organization needs to be able to control IP such that each location can work its piece and critical IP is not exposed unnecessarily.  Also, processes, tools and metrics need to be in place for virtual team management.  Finally, since GPD is a major change, change-management will be needed to make sure it succeeds.
  5. Collaborative Culture
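
To make the modularity point concrete, here is a minimal sketch (all names and thresholds are hypothetical, not from the article) of what a separable module boundary can look like in code: the outsourced piece sits behind an explicitly specified interface, so a partner can implement it independently while the core team integrates and tests against the contract.

```python
from abc import ABC, abstractmethod

class BatteryPack(ABC):
    """Hypothetical outsourceable subsystem.

    The interface, units, and acceptance thresholds form the module
    boundary: the partner implements it; the core team integrates and
    tests against the contract without seeing the partner's internals.
    """

    @abstractmethod
    def capacity_kwh(self) -> float: ...

    @abstractmethod
    def max_discharge_kw(self) -> float: ...

def acceptance_test(pack: BatteryPack) -> bool:
    """Core team's check at the module boundary (invented thresholds)."""
    return pack.capacity_kwh() >= 60.0 and pack.max_discharge_kw() >= 150.0
```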

Metrics: R&D Should Settle for Second Best

The article Metrics: R&D Should Settle for Second Best in CEB Views points out that it is generally not worth investing heavily in developing new R&D metrics.  However, as we have seen, there is plenty of research suggesting that what you measure will drive the behavior of your R&D teams – so please keep that in mind.

The article points out that most R&D departments use very simple metrics.

These simplistic measurements are not necessarily used because simple metrics are the most effective; they may be used because measuring the right thing is difficult to do.  For example, not one of the top metrics in the article's list addresses the performance or maturity of R&D projects underway and how they compare with expectations.  Even though this is hard to do, it could have a huge benefit for overall R&D management.
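
As an illustration of what such a metric might look like – a sketch with invented fields and numbers, not something from the article – one could track each in-flight project's actual maturity against its planned maturity to date:

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    planned_maturity: float  # fraction of milestones planned to be done by now
    actual_maturity: float   # fraction of milestones actually completed

def maturity_variance(p: Project) -> float:
    """Positive = ahead of plan, negative = behind plan."""
    return p.actual_maturity - p.planned_maturity

portfolio = [
    Project("sensor-v2", planned_maturity=0.60, actual_maturity=0.45),
    Project("new-alloy", planned_maturity=0.30, actual_maturity=0.35),
]

# Flag projects more than 10 points behind expectations for review.
behind = [p.name for p in portfolio if maturity_variance(p) < -0.10]
print(behind)  # ['sensor-v2']
```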

Overall, the article's four takeaways include two particularly useful ones (the first two below):

  1. Use qualitative metrics to evaluate early-stage investments: Very important because it is hard (if not impossible) to value the benefits of early-stage technologies – especially when they might impact many different product lines or would require other technologies to mature before they can be of use.
  2. Use business outcome targets to classify project types: I take this to mean that it is important to categorize the R&D pipeline and then measure projects based on the category they fall into (somewhat related to the bullet above).
  3. Supplement business outcome metrics for accurate performance assessments: The idea being that revenues/profits should not be all that drives decisions…
  4. Use metrics to motivate not intimidate: Easy to say, hard to do…

An Epidemic Of Failing To Manage Growth

The article An Epidemic Of Failing To Manage Growth on Forbes.com suggests that a lot of the ills that befell companies like Toyota, Dell and BP arose because they grew too fast and, being too profit-driven, did not manage that growth.

Their chief executives appear to have unquestioningly accepted the Wall Street axiom that growth is the greatest corporate goal. Growth is always good, we hear. Bigger is always better. Companies either grow or die, and public companies must show ever increasing quarterly earnings.

The solution, according to the author, is to manage the risks of growth:

1. Conduct an annual growth risks audit as part of its budgeting and strategy processes. The audit’s results should be disseminated to all managers, so they can be sensitive and alert to early warning signals. Leaders must constantly convey what cannot be compromised by growth.
2. Have business unit leaders create independent cross-functional teams that report directly to them and are responsible for monitoring the risks of growth and implementing risk management and mitigation plans, which should take effect when predetermined alarms are activated. These teams cannot have conflicting responsibilities and should not be responsible for producing growth. The teams must be measured and rewarded for managing the risks of growth.
3. Base a meaningful percentage of the compensation of all senior leaders and management on successfully managing the risks of growth.
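
To make item 2 above concrete, here is a minimal sketch of what the article's "predetermined alarms" might look like in practice – the indicators and thresholds are invented for illustration:

```python
# Hypothetical growth-risk indicators with predetermined alarm thresholds.
ALARM_THRESHOLDS = {
    "open_engineering_reqs_per_manager": 12,   # span-of-control strain
    "warranty_claims_per_1k_units": 8.0,       # quality slipping
    "supplier_late_delivery_pct": 15.0,        # supply chain stretched
}

def check_alarms(indicators: dict[str, float]) -> list[str]:
    """Return the indicators that have crossed their alarm threshold."""
    return [name for name, value in indicators.items()
            if value > ALARM_THRESHOLDS[name]]

tripped = check_alarms({
    "open_engineering_reqs_per_manager": 15,
    "warranty_claims_per_1k_units": 5.2,
    "supplier_late_delivery_pct": 18.0,
})
if tripped:
    print("Activate mitigation plan for:", tripped)
```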

I am not sure I agree.  The problem is not really growing too fast – it is that there are no processes and tools to manage the type and volume of work that needs to be performed.  In fact, growth might actually be required to survive in many industries.

For example, for an R&D-driven firm, how does one “manage the risk of growth?” Does one slow down product development?  If that happens, the firm might lose its competitive positioning.

Does one address smaller market niches?  This is difficult to do in a product-platform-driven world. Most companies have learned to target the top niche first and then use the platform to cover a broader range of lower-end markets.  Just look at most cell phone makers like HTC or computer makers like Dell: they come out with high-end models at high prices and then migrate the technology to lower-end models.  So the company rarely has a choice to slow down R&D.  If that is the case, what will growth audits do?

A better solution would be to invest in risk management processes and tools that identify and address the risks introduced by an increasing pace of R&D.

What do you think?


Putting a value on training

Training is critical to most R&D organizations.  Toyota, as we have seen, has made improved training a key cornerstone of its quality improvement initiatives. The article Putting a value on training in McKinsey Quarterly addresses how to measure the effectiveness of training programs and how to develop a business case for deploying them.

…typically measure training’s impact by conducting surveys of attendees or counting how many employees complete courses rather than by assessing whether those employees learned anything that improved business performance.  This approach was, perhaps, acceptable when companies had money to spare. Now, most don’t. 

However, there is a need for more formal approaches to measuring the return on investment of training programs:

Yet more and more, organizations need highly capable employees—90 percent of the respondents to a recent McKinsey Quarterly survey said that building capabilities was a top-ten priority for their organizations. Only a quarter, though, said that their programs are effective at improving performance measurably, and only 8 percent track the programs’ return on investment. 

The article talks about a detailed training program for BGCA (Boys and Girls Clubs of America).  Suffice it to say that the training was quite extensive and expensive.

BGCA therefore built its training program around those four subjects. The program involved both intensive classroom work and a project chosen by each local team; projects ranged from implementing new HR processes to deepening the impact of after-school programs. By the end of 2009, over 650 leaders from approximately 250 local organizations had been trained.

Here is the key message: plan how you will measure effectiveness before launching an expensive training program.  This was much easier for a not-for-profit organization such as BGCA:

Because the program was designed to improve specific organizational-performance outcomes, the process of assessing its impact was straightforward. Where the leaders of local organizations had received training, BGCA compared their pre- and post-training results. More important, it also compared the post-training results against those of a control set of organizations, which had similar characteristics (such as budget size) but whose leaders had not yet gone through the training. 

FYI – the training was a success for BGCA.  They could measure the delta between trained and untrained organizations and actually calculate a return on investment.  The fact that they matched organizations to control sets gave them confidence that the results were relevant.  In for-profit organizations the metrics might be different, but they must still be measured before and after launching training programs.  Metrics and accountability are key to the success of most campaigns.
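
The BGCA evaluation is, in effect, a difference-in-differences comparison. A minimal sketch of the arithmetic (all numbers invented for illustration):

```python
# Difference-in-differences: compare the pre/post change of trained
# organizations against the pre/post change of a matched control set.
trained_pre, trained_post = 100.0, 118.0   # e.g., avg performance score
control_pre, control_post = 100.0, 106.0   # matched but untrained orgs

training_effect = (trained_post - trained_pre) - (control_post - control_pre)
print(training_effect)  # 12.0 points attributable to training

# If the effect can be monetized, ROI follows directly.
benefit_per_point = 5_000.0   # hypothetical dollars per point per org
cost_per_org = 20_000.0
roi = (training_effect * benefit_per_point - cost_per_org) / cost_per_org
print(f"ROI: {roi:.0%}")  # ROI: 200%
```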

Key takeaway:

In every case, companies must continually review and revise the links between skills, performance, and training programs. Typically, to determine which metrics should be improved, companies assess their current performance against industry benchmarks or their own goals. Like retailers and manufacturers, most other companies know what kinds of skills are tied to different areas of performance. So a good next step is to conduct an analysis of the relevant groups of employees to identify the most important specific skills for them (as BGCA did) and which performance-enhancing skills they currently lack. To get a clear read on the impact of a program, it’s crucial to control for the influence of external factors (for instance, the opening of new retail competitors in local markets) and of extraordinary internal factors (such as a scheduled plant shutdown for preventative maintenance). It’s also crucial to make appropriate comparisons within peer groups defined by preexisting performance bands or market types. 


Behind the scenes at Toyota’s R&D center – Part II

As we discussed yesterday, Toyota is out trying to mend its broken reputation.  When the light of public scrutiny is shining on a company, it is not good to have shallow marketing campaigns…  Unfortunately, that is what Toyota did with its Star Safety System.  Some actions, such as starting field quality offices, may have an indeterminate impact.
On the other hand, as part of this process, Toyota has granted a few journalists valuable and unprecedented access to its R&D organization.  This access included detailed briefings about R&D organizations/processes, what might be wrong with them, and what Toyota plans to do about them (Deep-Dive: Behind the scenes at Toyota’s R&D center, Part 1 — Autoblog; Deep-Dive: Behind the scenes at Toyota’s R&D center, Part Two — Autoblog).  We discussed the existing organization and processes yesterday.  These briefings do contain some information about concrete steps that Toyota plans to take.  Let’s dig into them and see what we can learn:

To start, Toyota believes the root cause of its quality problems is rapid growth and the inability of the organization / culture / processes / training to keep up (from Autoblog):

Takeshi Uchiyamada, Toyota’s executive vice-president for research and development, acknowledged during a group interview that overly aggressive growth over the past decade had contributed to the current problems. Branching into too many new market segments too quickly stretched Toyota’s resources, making it difficult to develop young engineers and technicians.
The assertion is that this lack of oversight is what led to problems – even simple problems, like floor mats sticking under pedals, are blamed on inexperience:

The excessively lean organization at Toyota has led to younger staff not getting the necessary oversight to help them learn the nuances in engineering. Engineering is about much more than hard numbers and quantitative analysis – good engineers learn to think outside the box, examining ways their products could be used or misused in unexpected ways. In Toyota’s case, a prime example was the use of all-weather floor-mats. When the mats were developed, they were not intended to be used in conjunction with standard carpet floor-mats – and yet, that’s exactly what happened, leading to a spate of issues with mats being jammed under accelerator and brake pedals.

Based on this root-cause analysis, Toyota is proposing several solutions.  From Edmunds:

Some of what Toyota is doing represents “enhancements” of what it has always done, but other steps (such as consolidating groups whose areas of responsibility overlap considerably) are more than that.

One proposed solution is to do more training:

Uchiyamada says that Toyota will be doing “more teaching of younger staff” in ways to examine issues and find innovative solutions.

The reason for the lack of training is thought to be an excessively lean or excessively flat organization:

Toyota now believes that its product development organization has become too flat over the years, with group managers having too many team members reporting to them. While many organizations have been trying to take out layers of management in recent years to improve organizational efficiency and lower costs, this strategy can be taken too far. One of the things that managers need to do is educate and develop the staff reporting to them.
Training (Via Autoblog)

Clearly, more training is generally better.  However, adding a layer of management is not necessarily the best way to get it.  What Toyota needs is more agility to deal with evolving technologies and changing market conditions.  One of the problems Toyota and most other companies face is the faster pace of product development driven by increased competitive pressure.  In many cases, extra layers of management actually slow down organizational learning because they want to do things the old way.  More on this below.  Even more importantly, Toyota’s lean culture has developed over decades.  How will adding extra layers of bureaucracy change that culture?  Will it take away the good parts of lean as well as the bad?

Another proposed solution is to drive quality innovation through a separate organization and the allocation of extra time:

He said that each Toyota employee must be “doing our work better,” and that the new Design Quality Innovation Division would be an important factor. He acknowledged the division and its work aren’t necessarily “eye-catching” from a PR standpoint, but are critical.

The four-week extension of lead time invoked by the Design Quality Innovation Division is a very important move, and represents a significant shift on Toyota’s part. This additional time period will not be used for new testing and evaluation per se, but rather gives the division a chance to play devil’s advocate, and look for potential issues from a broad perspective.

Adding extra time for quality control will hopefully surface more flaws.  However, it is extremely important to measure the value delivered by this additional time.  The devil is in the details: putting in extra time to check will not necessarily improve quality unless these inspections are tied into the overall R&D process.  Even more importantly, setting up a new division is fraught with danger – how will this division work with the existing quality division?  Who will be responsible for what, and who will have ultimate authority?
Another proposed solution is to slow down development and add schedule time for additional quality control.

I am not sure how easy it will be to implement this… Are customer requirements not driving the design to start with? Why is the customer’s viewpoint not already part of overall development through marketing input and reviews?  Why is there a need for a separate evaluation?  What if this review finds problems with a supplier part?  Do they stop the entire development cycle and wait?

To that end, Toyota wants to work closely with suppliers (co-design?):

Toyota will coordinate and cooperate more closely with suppliers. Rather than just letting them design to a specification, they will work more closely with Toyota engineers so that Toyota is familiar with their thought and evaluation processes. However, he also said that Toyota will likely be bringing more of the design and development work in house in the future.

This is clearly a great idea.  However, the quality control and management bureaucracy added above might actually impede collaboration with suppliers.  How will Toyota manage these contradictions between processes?

Finally, I remain convinced that the root cause is actually increased complexity.

Regardless of whether you’re talking about the most basic transportation in the world (think: Tata Nano) or an advanced hybrid or electric vehicle, it would be impossible to meet the often contradictory requirements of customers and regulators without electronics and software. As the capabilities of electronic systems have increased, so, too, have the complexity of the interactions in these systems. Developing robust electronic control systems requires endless testing at every level, from the earliest software-in-the-loop simulation to full vehicle-in-the-loop evaluation.

Here is a chart showing all the systems and subsystems linked together through the Engine Control Module (ECM).

The engine management system alone consists of some 800,000 lines of “C” code split into 1,600 functional modules. Like most manufacturers today, Toyota is using software development tools like Matlab and Simulink to model functions and test them before ever generating a single line of code. Just as simulation is used for developing crash structures, mathematical models of the vehicle and powertrain components are used to check out the software before prototype electronics are produced.

This complexity will only increase with time.  The pressure to deliver products more quickly will also increase – not decrease.  I am not sure that adding time to product development will solve anything.  Toyota really needs to go back to the drawing board and identify new processes that will help it be more agile – not add more quality control…
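
As a tiny illustration of what software-in-the-loop testing means – a toy sketch, emphatically not Toyota's code or toolchain – a control function can be exercised against a simulated plant model long before any hardware exists:

```python
# Toy software-in-the-loop test: run a throttle controller against a
# simulated engine model instead of real hardware. Purely illustrative.

def throttle_controller(target_rpm: float, current_rpm: float) -> float:
    """Proportional controller returning throttle position in [0, 1]."""
    k_p = 0.002
    return max(0.0, min(1.0, k_p * (target_rpm - current_rpm)))

def engine_model(rpm: float, throttle: float, dt: float = 0.01) -> float:
    """Crude first-order engine model: throttle adds speed, drag removes it."""
    return rpm + (throttle * 6000.0 - 0.5 * rpm) * dt

def test_reaches_target():
    rpm, target = 800.0, 2500.0
    for _ in range(5000):  # simulate 50 seconds
        rpm = engine_model(rpm, throttle_controller(target, rpm))
    assert abs(rpm - target) < 200.0, f"settled at {rpm:.0f} rpm"

test_reaches_target()
```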


Behind the scenes at Toyota’s R&D center – Part I

As promised earlier in our case study on portfolio management, here are some insights into R&D management at Toyota.  As we have discussed in the past, Toyota has suffered quite a few setbacks this year, and a lot of these problems stem from increased complexity.  Toyota has been working hard to reverse some of the bad publicity it has received and recently invited some journalists to see what changes it is making to address the quality problems and maybe drive up sales.  Autoblog was one of them and has two articles detailing the visit (Deep-Dive: Behind the scenes at Toyota’s R&D center, Part 1 — Autoblog; Deep-Dive: Behind the scenes at Toyota’s R&D center, Part Two — Autoblog).

In an effort to show transparency and a concerted effort to improve its quality and safety, for the first time in its history, Toyota has invited a small group of journalists and analysts into its research and safety facilities in Toyota City, Japan. As part of that select group of media, in the coming days, we’ll have a chance to peek behind the curtain, look at how its products are developed and tested and talk to Toyota executives, including CEO Akio Toyoda as we try to fully understand not only how things went so horribly wrong, but how the automaker plans to get back on track.

Let’s dig into the article and see what we can learn about R&D management processes at Toyota, and about R&D management in general.
Overall, Toyota says it is going to increase quality control checks and train its engineers more.

For the most part, Toyota will continue creating cars and technology in the same manner it has in the past. However, the two major areas that will change include an expansion of the testing use cases beyond current methodologies and improvements in the training and development of its staff.

First of all, the chart below gives an idea of the magnitude of the R&D management challenge. To get a complex system like a cutting-edge car to market and compete successfully, Toyota has to fulfill multiple roles: understand customer preferences, design cars that functionally and visually satisfy customer needs, work with parts suppliers, develop cutting-edge new technology, mature that technology, integrate in-house technology with supplier parts, and test those systems.  Add to this a diversity of locations, cultures and associated politics (Japan vs. US) and you see that this is not an easy management task by far.

Toyota’s R&D Organization (via Autoblog)

The overall process architecture is also somewhat intuitive – a chief engineer is assigned to each product and becomes the communication bridge between marketing, research, advanced development (components and subsystems) and the product development team.  It might just be a PowerPoint artifact, but notice the somewhat late involvement of production engineering – it appears they are not using concurrent engineering processes, or co-design for that matter.

All of this is overseen by a chief engineer for every project or vehicle. The job of the chief engineer is to oversee everything related to a project, taking inputs from product management, marketing and advanced engineering, then sending it on to the functional groups in their organization. Every automaker has their own version of this chief engineer, with a variety of titles. At General Motors, this would be the vehicle line executive, at Ford, it’s the chief nameplate engineer and the title at Honda is large project leader. Whatever the title, the end result is that this individual has ultimate responsibility for the end product.

The product development process proceeds in stages with increasing levels of maturity.  There is requirements analysis, hardware design and software design at each stage, followed by a testing / evaluation process.

To a large degree, much of Toyota’s product development process isn’t really any different from what we have seen at other automakers. At its most basic level, it consists of three central phases, starting with requirements analysis. At the beginning of a project, whether it’s a new car or just a new technology, the engineers determine what the product ultimately needs to do and how it should perform. Based on those requirements, a set of detailed specifications are produced.

Product Development Process at Toyota (via Autoblog)
It is a bit interesting that all the testing happens at the end of each phase as opposed to some form of concurrent engineering (maybe the PowerPoint is just for illustration).
Tomorrow, we will look at the changes Toyota is proposing to the R&D management process and discuss whether they will make a difference.

Update on the Portfolio Management case study

Here is a quick update on the portfolio management case study: the actual cost to Microsoft of the Kin failure – resulting (in my opinion) primarily from a failure to effectively manage a portfolio of competing/complementary R&D projects – is $700M+.  Check out the posts on Portfolio Management for processes, tools and learnings on how to avoid portfolio management errors.

Via Engadget: “Here’s a tidbit in today’s Microsoft quarterly earnings that we previously overlooked: a $240 million cost of revenue ‘primarily… resulting from the discontinuation of the Kin phone, offset in part by decreased Xbox 360 console costs.’ In other words, the company took at least a quarter billion hit due to manufacturing, distribution, and support costs of the Kin (according to Microsoft’s definition of ‘cost of revenue’). We don’t know how much Xbox 360 offset, unfortunately, but we can add this figure to the $500 million Danger acquisition and the full marketing cost for the product (which we also don’t know, but anecdotally, it was on par with other major campaigns) to reach… well, at least $800 million in regret for the folks in Redmond.”


Practical Advice for multi-location team development

The article Practical Advice for Companies Betting on a Strategy of Globalization in Knowledge@Wharton has useful reminders about how to strike a balance between local staff and head-office staff.  As with anything, the suggestion is to start with a clear vision:

The success of any international venture also depends on the human resources policy that the company pursues. ‘It took us years to create local talent,’ said Alvarez-Pallete. He believes it is essential ‘to decide what part of the business you are going to manage locally, and what it is that creates the most value.’

The article recommends a balanced approach – hire local talent, but put in charge only those people whose values are similar to the head office’s.  The diversity they bring will enhance performance, and the fact that they understand the corporate culture will make communication easier.

As a result, he leaves in charge those people who are closest to the corporate culture and goals laid out by headquarters. Falcones added that ‘diversification contributes wealth in terms of human resources. It is one of the most important assets brought by globalization. It is incredible how much we can learn about good business practices when we can understand different cultures.’

I am not sure how easy it is to find these people – especially if the cultural / geopolitical differences between the head office and local offices are large…


Optimizing Product Development

The paper Balancing Development Costs and Sales to Optimize the Development Time of Product Line Additions in the Journal of Product Innovation Management has some very interesting data for R&D managers.  It attempts to quantify and test the gut feel that R&D portfolio managers use in deciding how to fund development projects – and the results might surprise you.

Development teams often use mental models to simplify development time decision making because a comprehensive empirical assessment of the trade-offs across the metrics of development time, development costs, proficiency in market-entry timing, and new product sales is simply not feasible. Surprisingly, these mental models have not been studied in prior research on the trade-offs among the aforementioned metrics. These mental models are important to consider, however, because they define reality, specify what team members attend to, and guide their decision making.

Clearly, the problem facing portfolio managers is large – balancing schedule, costs, market timing and sales (among other objectives).  There is no easy way to do this quantitatively, so managers have to depend on their intuition.  However, the paper’s analysis shows that there is a significant cost to this simplification.  The analysis is based on a sizable dataset (albeit one that might have some geographic / cultural bias, as it is all from the Netherlands).

This survey-based study uses data from 115 completed NPD projects, all product line additions from manufacturers in The Netherlands, to demonstrate that there is a cost to simplifying decision making. Making development time decisions without taking into account the contingency between development time and proficiency in market-entry timing can be misleading, and using either a sales-maximization or a cost-minimization simplified decision-making model may result in a cost penalty or a sales loss.

The results are surprising, yet intuitive in hindsight: instead of maximizing a single dimension, optimal results are obtained when a balance is achieved between several competing objectives:

The results from this study show that the development time that maximizes new product profitability is longer than the time that maximizes new product sales and is shorter than the development time that minimizes development costs.
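
A toy numerical illustration of this ordering – the curves are invented, only their shapes matter – shows the profit-maximizing time landing strictly between the sales-maximizing and cost-minimizing times:

```python
# Toy model: sales decline past an ideal entry window, development cost
# falls (up to a point) as the schedule is relaxed. Shapes are invented.
def sales(t):          # peaks at t = 10 months
    return 1000 - 8 * (t - 10) ** 2

def dev_cost(t):       # crash schedules are expensive; minimum at t = 18
    return 400 + 3 * (t - 18) ** 2

def profit(t):
    return sales(t) - dev_cost(t)

times = [t / 10 for t in range(80, 220)]   # 8.0 .. 21.9 months
t_sales = max(times, key=sales)            # 10.0
t_cost = min(times, key=dev_cost)          # 18.0
t_profit = max(times, key=profit)          # 12.2, strictly in between
print(t_sales, t_profit, t_cost)
```

With these invented coefficients the toy model also echoes the paper’s second finding: the profit given up by launching at the sales-optimal 10 months is far smaller than the profit given up by waiting for the cost-optimal 18 months.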

If one is forced to lean one way or the other, accelerating development to maximize sales (with the associated increase in development costs) is better than minimizing development costs by stretching the schedule.

Furthermore, the results reveal that the cost penalty of sales maximization is smaller than the sales loss of development costs minimization. An important implication of the results is that, to determine the optimal development time, teams need to distinguish between cost and sales effects of development time reductions.

I have a feeling that this result may have other underlying causes – for example, extending the development schedule to reduce costs might demoralize teams or increase defects…