Globalization Penalty

Ford recently announced it is investing $1B in a new plant in India (U.S. automakers in race for Indian market – The Washington Post). Ford is changing its strategy for the region and will develop new products designed specifically for the market:

This new focus on India has required something of a philosophical shift for America’s big auto manufacturers, a post-downturn realization that the old ways of doing business no longer guarantee success, said Michael Dunne, president of Dunne and Co., a Hong-Kong based investment advisory firm specializing in Asia’s car markets.
In the past, U.S. carmakers tended to launch products in emerging markets that were successful in Europe “and anticipate that customers will trade up to the higher price level,” Dunne said.

The article points out that the Indian market is going to grow at more than 12% per year and Ford needs to secure a share of that market.  The effort to reduce product costs while still delivering reasonable products might even benefit the core business through reverse innovation or frugal engineering.
As we have discussed recently, the drive to reduce cost might even spur innovation in general.  However, there is some penalty to globalization, as pointed out by McKinsey Quarterly (Understanding your ‘globalization penalty’):

The rapid growth of emerging markets is providing fresh impetus for companies to become ever more global in scope. Deep experience in other international markets means that many companies know globalization’s potential benefits—which include accessing new markets and talent pools and capturing economies of scale—as well as a number of risks: creeping complexity, culture clashes, and vigorous responses from local competitors, to name just a few.

The article analyzes data from hundreds of thousands of employees to arrive at three major risks of globalization: 1) dilution of company vision and focus, 2) reduced innovation and learning, and 3) ineffective collaboration and partnership between locations.

Clearly, having a larger product portfolio increases complexity, which reduces focus, dilutes vision and increases risks to overall performance.

Complicating matters further, our interviews suggested that, for most companies, about 30 to 40 percent of existing internal networks and linkages are ineffective for managing global–local trade-offs and instead just add costs and complexity. 

We have also seen that modularity can actually negatively impact innovation, and we have discussed some approaches to address global product development.

Managing culture and training across a diverse organization is clearly more difficult, which might account for the lack of learning.  Maybe organizations need to improve global talent management? Or R&D managers can get involved to increase creativity.  The article finds that motivation among team members actually improved under globalization. This is counter to some of the work from Wharton, which suggests that motivation declines as team size increases.  Maybe rivalry actually improves performance?

The other problem may be with collaboration between multicultural teams:

Many companies, for example, can’t identify transferable lessons about low-income consumers in one high-growth emerging market and apply them in another. Some struggle to coalesce rapidly around market-specific responses when local entrants undermine traditional business models and disrupt previously successful strategies.

On the other hand, research shows that very little dispersion is needed before a team becomes virtual.  Since the teams are likely virtual to start with, the only major challenge would be different cultures.  We have discussed several approaches to improve multicultural team performance (here and here).


Innovation at Bell Labs (The Idea Factory)

A couple of articles in the Oil and Glory blog describe innovation at Bell Labs (Book Review: Jon Gertner’s “The Idea Factory” and What Obama could learn from Bell Labs).  Bell Labs, as the article points out, delivered many of the innovations that made modern devices possible:

“The name Bell Labs is synonymous with cutting-edge invention, winning seven Nobel Prizes (including by Energy Secretary Steven Chu) and turning out world-changing inventions like the transistor (pictured above), the silicon photovoltaic solar cell and radio astronomy. 

It is interesting to see that even fifty years ago, Bell Labs had a clear understanding that innovation requires new technology and manufacturing processes integrated into a system that provides concrete benefits for the user:

“It is not just the discovery of new phenomena, nor the development of a new product or manufacturing technique, nor the creation of a new market. Rather the process is all these things acting together in an integrated way toward a common industrial goal,” he quotes Jack Morton, a Bell Labs engineer.

Even the leadership had a definition of what innovation meant that could be easily communicated. As we have discussed in the past, if the leaders do not know what innovation is, how are they going to encourage it?:

At Bell Labs, Mervin Kelly’s shorthand definition of innovation was something that is “better, or cheaper, or both.” If this succeeds, it will certainly fall into that category.

They even realized that the key to innovation is the ability to effectively address technological complexity and then mask it in the user experience.

“One of the more intriguing attributes of the Bell System was that an apparent simplicity — just pick up the phone and dial — hid its fiendish and increasing interior complexity,” Gertner writes. 

So, what made innovation happen at Bell Labs?  The most important factor was vast resources (probably funded by AT&T profits in addition to generous government funding).  These resources meant a large and brilliant workforce had the freedom to pursue many problems.  More importantly, they had a never-ending stream of challenges that focused innovation:

Structurally, what defined Bell Labs was a large, brilliant, interdisciplinary work force that was supplied with freedom and vast resources and a never-ending stream of technical problems within the phone system that drew on the staff’s expertise. 

As we have discussed in the past, innovation happens at the intersection of technologies.  The Bell Labs model encouraged informal interactions between multiple disciplines and the abundance of resources facilitated experimentation:

In Bell Labs’ old days, an informal exchange of ideas (over lunch, during a stroll in the hallways, and so forth) was part of the innovation process. At universities and research institutions everywhere, it still is.

Furthermore, business processes were flexible enough to allow a variety of organizational structures to nurture innovation in multiple ways – anything from three-person groups to very large teams:

With an invention like the transistor, Bell Labs used an orchestrated effort and a mid-sized team; but the silicon solar cell was quite different. Indeed, the latter breakthrough was serendipitous: Three men, each working in different buildings, somehow connected the right technology with the right problem at the right time. Meanwhile, later innovations such as cellular phone networks and the development of fiber optic systems required vast teams of hundreds of people. I think all these approaches — perhaps with the exception of the solar cell — were quite targeted, and are thus still viable today. 

So what can we learn from Bell Labs?  The author is uncertain.  I think we would be hard pressed to show a business case for the level of investment.  It is true that a lot of great innovations came out of the organization. However, we tend to forget major failures.

More important, perhaps, was that the Labs management at times made big errors in judging what technologies to pursue for the future. In my book I focus on two in particular: the waveguide and the Picturephone. 

Also, it is easy to assume that everyone working at the lab was an innovator and that management always knew how to enable success.

And I think that’s a mistake. Bell Labs was not a great experience for everyone employed there; there were internal politics, personality clashes, miscommunications, and every other problem that affects a big organization. 

Most importantly, the world has changed quite a bit in the last few decades, and the idea of a walled garden for innovation probably would not succeed in the current environment.

Research efforts are expected to move faster today, and there seems to be a lower tolerance for failure, especially if any public funding is involved. Also, an ability (or willingness) to invest for the distant future, and to thus work with a new technology through an arduous and expensive development process, seems to be in shorter supply. 


Courier: R&D Planning & Portfolio Management at Microsoft

I have been meaning to write about the development and cancellation of Courier, an innovative tablet concept from Microsoft.  The c|net article on the subject provides quite a bit of useful information – both about innovation management best practices and about some opportunities for improvement.  Courier was developed at Microsoft’s skunkworks (Pioneer Studios).  Microsoft invested considerable resources in the concept (130 employees and $25M in funding).  The concept was very well received (see Courier: First Details of Microsoft’s Secret Tablet in Gizmodo):

It feels like the whole world is holding its breath for the Apple tablet. But maybe we’ve all been dreaming about the wrong device. This is Courier, Microsoft’s astonishing take on the tablet.

However, they had to cancel the product because it did not fit into Microsoft’s product portfolio (See Microsoft confirms, kills Courier in one fell swoop — Engadget):

Well this is depressing. Word has just gone fluttering out of Redmond that work on the Courier project — a heretofore rumored dual-screen tablet which rightfully set the tech world ablaze — has been spun down by the company.

It is unclear which, if any, technologies developed as part of the innovation project ever got transitioned into the rest of the portfolio.  The cancellation led to significant organizational strife and hard feelings.  I think R&D managers can learn a lot from this event.

Courier’s death also offers a detailed look into Microsoft’s Darwinian approach to product development and the balancing act between protecting its old product franchises and creating new ones. The company, with 90,000 employees, has plenty of brilliant minds that can come up with revolutionary approaches to computing. But sometimes, their creativity is stalled by process, subsumed in other products, or even sacrificed to protect the company’s Windows and Office empires.

So let's dig in…
As we have discussed in the past (here and here), Microsoft’s portfolio process seems to be driven by senior executive champions. In the case of tablets, there were two groups led by two senior executives working on competing products.

One group, led by Xbox godfather J Allard, was pushing for a sleek, two-screen tablet called the Courier that users controlled with their finger or a pen. But it had a problem: It was running a modified version of Windows.
That ran headlong into the vision of tablet computing laid out by Steven Sinofsky, the head of Microsoft’s Windows division. Sinofsky was wary of any product–let alone one from inside Microsoft’s walls–that threatened the foundation of Microsoft’s flagship operating system. But Sinofsky’s tablet-friendly version of Windows was more than two years away.

Senior executive ownership has some benefits: champions can ensure the product receives the right kind of focus and resources to get it to market.  The approach may help overcome the valley of death in innovation maturation.  However, it also has a key disadvantage: disconnected and conflicting projects in the R&D portfolio:

The Courier group wasn’t interested in replicating Windows on a tablet. The team wanted to create a new approach to computing.

The two lines of R&D were somewhat incompatible, and the underlying culture of executive champions prevented integrated portfolio management.  Microsoft’s CEO, Steve Ballmer, had to call in Bill Gates to determine the path forward.  Gates did a product review and did not come out in favor of the new innovation (because of how far it was from the traditional Windows/Office business model):

“This is where Bill had an allergic reaction,” said one Courier worker who talked with an attendee of the meeting. As is his style in product reviews, Gates pressed Allard, challenging the logic of the approach.

Within a few weeks, Courier was cancelled because the product didn’t clearly align with the company’s Windows and Office franchises, according to sources.

The cancellation had a significant immediate impact on Microsoft’s business:

Rather than creating a touch computing device that might well have launched within a few months of Apple’s iPad, which debuted in April 2010, Microsoft management chose a strategy that’s forcing it to come from behind. The company cancelled Courier within a few weeks of the iPad’s launch.

Furthermore, the move away from innovation had a long-term impact on the product development cycle and the product portfolio at Microsoft:

But using Windows as the operating system for tablets also implies that Microsoft will update the devices’ operating systems on the Windows time frame, typically every three years. Compare that to Apple, which seems likely to continue to update the iPad annually, a tactic that drives a raft of new sales each time a new generation hits the market. By the time Windows 8 rolls out, Apple will likely have introduced its iPad 3. Moreover, Amazon’s much anticipated Kindle Fire tablet, which goes on sale November 15, will have nearly a year head start on the Windows-powered tablet offerings.

So what if anything could have been done differently and what can we learn from this?  First, many companies try to overcome the bureaucracy of a large organization by creating skunkworks (See Nokia).  The idea was similar at Microsoft:

The gadget was the creation of Allard’s skunkworks design operation Pioneer Studios and Alchemie Ventures, a research lab that also reported to Allard. (The lab took the German spelling of “alchemy” to highlight the stereotypical Teutonic traits of structure and regiment it hoped to bring to its innovation process.)

However, skunkworks-like environments are hard to integrate into the overall culture.  They tend to become quite segregated, causing many of the innovations to wither on the vine:

Allard created a fantasyland inside Microsoft where Apple fanboys could tinker on stylish products that would never see the light of day. They point to the opulent 36,000-square foot office of Pioneer Studios, headquartered in Seattle’s Pioneer Square, that featured huge open spaces, dotted with cushy Eames lounge chairs, angular white desks, blond wood floors, and exposed brick walls. It may have been 16 miles from Microsoft’s far more corporate Redmond, Wash., campus, but it was a galaxy away in terms of workplace design.

Clearly, Pioneer Studios had envisaged this scenario and tried to form project networks that would bring its innovation culture to the rest of the company:

He encouraged employees to seek out new colleagues with diverse backgrounds who could challenge Microsoft’s conventions and push the company to approach new opportunities in different ways.

Microsoft made an effort to implement a structured innovation management process:

Allard created Alchemie to focus on innovation process to make sure that the efforts of Pioneer were not scattershot. It studied best practices, both within and outside Microsoft, to “design a repeatable, predictable and measurable approach for building new business” 

Additionally, they integrated some cutting-edge innovation management practices, such as a clear timeline for technology insertion and a stage-gate process to ensure that innovation projects do not spin too far from reality:

In fact, one of the mandates of Alchemie was to look only at product ideas and business concepts that were no farther than three years into the future. The Alchemie book includes something of an innovation process road map that lays out four “gates” that ideas needed to pass through to move from incubation to product development. And a source said that Courier had made it through all four gates.

Another interesting concept they implemented was a clearly defined purpose combined with the freedom to explore new solutions:

“Infuse them with our purpose,” Allard wrote. “Give them the tools. Give them lots of rope. Learn from them. Support where they take you. Invite them to redefine The Tribe.”

The Courier team also had a well-defined mission – Free Create – that further focused development:

The phrase at the core of the Courier mission was “Free Create.” It was meant to describe the notion of eliminating the processes and protocols that productivity software often imposes on workers.

The idea of Free Create was imbued into the entire development process – which is a great idea.  I am not sure of the business case for traveling to Milan to understand Moleskine, though…

The metaphor they used was “digital Moleskine,” a nod to the leather-bound notebooks favored in the design world. In fact, according to a few team members, a small group led by Petschnigg flew to Milan, Italy, to pick the brains of the designers at Moleskine to understand how they’ve been able to create such loyal customers.

One more interesting innovation management concept was implemented: disconnected prototypes that allow different subsystems to mature separately. This approach is advantageous in that it allows more experimentation, and we have seen that experiments boost productivity.  Steve Jobs followed a similar approach when developing the iPhone.

When Courier died, there was not a single prototype that contained all of the attributes of the vision: the industrial design, the screen performance, the software experience, the correct weight, and the battery life. Those existed individually, created in parallel to keep the development process moving quickly. Those prototypes wouldn’t have come together into a single unit until very late in the development process, perhaps weeks before manufacturing, which is common for cutting-edge consumer electronics design. But on the team, there was little doubt that they were moving quickly toward that final prototype.

It appears that the Courier team made significant progress (and used significant resources along the way):

Courier was much more than a clever vision. The team, which had more than 130 Microsoft employees contributing to it, had created several prototypes that gave a clear sense about the type of experience users would get.
It’s clear there were substantial resources behind the effort. The commemorative book, designed to resemble the journal-like look of the Courier, lists the 134 employees who contributed to the gadget’s creation. Moreover, Petschnigg writes on his LinkedIn profile page that he “managed $3.5 (million) seed funding, (and) secured $20 (million) to develop this new product category.”

However, there was a clear lack of coordination at the product portfolio level and there were no processes to align development plans across different product lines or R&D projects:

Early on, the group opted to use Windows for Courier’s operating system. But it wasn’t a version of Windows that any consumer would recognize. The Courier team tweaked the operating system to make sure it could perform at high levels with touch- and pen-based computing. What’s more, the graphical shell of Windows–the interface that computer users associate with the operating system–was entirely removed. So while it was Windows under the hood, the home screens bore zero resemblance to the familiar PC desktop.

This is a key problem with the skunkworks innovation concept.  A separate culture quickly becomes insular, and product-line divergences cannot be reconciled:

“A big lesson is that it may be easier to go into your quiet space and incubate. But when you want to get bigger and get more resources, you want to make sure you’re aligned,” a Courier team member said. “If you get Sinofsky on board from the start, you’re probably going to market.”

So the challenge again appears to be with Microsoft’s R&D planning and portfolio management process.  It is relatively easy to become innovative (maybe not at the $25M level, but at least to some degree); however, it is not easy to align product portfolios to bring innovation to market:

For Courier to come to life, the team creating it would have to convince the Microsoft brass that the device would offer the company substantial opportunities that Windows 8 could not. In the end, that proved to be too large a hurdle for J Allard, Courier’s leader and Microsoft’s chief consumer technology visionary. 

One way to address this challenge is to have more detailed R&D plans that can be shared and linked across different product lines.  Such plans could have allowed teams to decide how to bring different development paths together over time without an outright cancellation of Courier.  Well-communicated plans and roadmaps could have facilitated collaboration between the Courier and Windows 8 teams, ensuring that more of the technologies developed under Courier were integrated into Windows 8.  This unfortunately did not happen.

It’s unclear what, if any, pieces of the Courier technology are finding their way into other Microsoft products.

The only way any new innovation got introduced to Microsoft was through unmanaged diffusion:

Courier team members scattered. Many moved on to other products at Microsoft, such as Xbox, Windows Phone, and Bing. Others are involved with different incubation efforts at the company. 

A final lesson could be the need for better portfolio management processes, such as more frequent portfolio reviews, where executives could have either reconciled development plans or eliminated the project before significant resources and emotions were invested:

And a few employees who contributed to the product’s development have left the company altogether, joining other tech firms such as Amazon, Zynga, and Facebook.


Rethinking knowledge management for R&D teams

As we know, R&D is focused on generating knowledge.  Unlike manufacturing, where the outcome is products, R&D generates knowledge about how to build those products.  Hence, R&D workers are by definition knowledge workers. The article Rethinking knowledge work: A strategic approach from McKinsey Quarterly has a very thorough discussion of the IT tools needed to improve the productivity of knowledge workers.

In the half-century since Peter Drucker coined the term “knowledge workers,” their share of the workforce has steadily grown—and so has the range of technology tools aimed at boosting their productivity. Yet there’s little evidence that massive spending on personal computing, productivity software, knowledge-management systems, and much else has moved the needle. What’s more, a wide variety of recent research has begun suggesting that always-on, multitasking work environments are so distracting that they are sapping productivity.

As we can all appreciate, what information is provided to R&D teams is far more important than how much.  In fact, we need to reduce the information overload.

It’s time for companies to develop a strategy for knowledge work—one that not only provides a clearer view of the types of information that workers need to do their jobs but also recognizes that the application of technology across the organization must vary considerably, according to the tasks different knowledge workers perform.

The article defines two approaches for providing knowledge (information) to R&D teams: 1) free access, where team members have access to the entire knowledge base and, hopefully, select the information they need; and 2) structured access, where information is prefiltered for the team member.

Few executives realize that there are two divergent paths for improving access to the information that lies at the core of knowledge work. The most common approach, giving knowledge workers free access to a wide variety of tools and information resources, presumes that these employees will determine their own work processes and needs. The other, the structured provision of information and knowledge, involves delivering them to employees within a well-defined context of tasks and deliverables. Computers send batches of work to employees and provide the information needed to do it.
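The structured-provision model in the quote above can be sketched as a simple work queue. This is a minimal illustration with entirely hypothetical task and document names; the point is only that the system, not the worker, decides which task arrives next and bundles the needed information with it:

```python
from collections import deque

class StructuredWorkQueue:
    """Sketch of structured provision: tasks arrive paired with prefiltered context."""

    def __init__(self):
        self._queue = deque()

    def push(self, task, context_docs):
        # The system pairs each task with the documents needed to complete it.
        self._queue.append({"task": task, "context": context_docs})

    def next_for(self, worker):
        # Workers receive one bundled unit of work at a time, in order.
        if not self._queue:
            return None
        unit = self._queue.popleft()
        unit["assigned_to"] = worker
        return unit

queue = StructuredWorkQueue()
queue.push("Review thermal model", ["thermal_spec_v2.pdf", "fea_results.xlsx"])
queue.push("Approve tolerance change", ["drawing_104_revB.pdf"])

unit = queue.next_for("analyst_1")
print(unit["task"])     # Review thermal model
print(unit["context"])  # the prefiltered documents, no searching required
```

The worker never searches; the trade-off, as the article notes, is the up-front system and process design needed to decide what context each task requires.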

Free access is the model employed by most R&D organizations because of its ease of implementation:

The information technology behind the free-access model is relatively easy to implement. The Internet and social media are readily accessible to anyone, and access to third-party databases is possible with any Web browser—although closed company cultures sometimes impede knowledge sharing.

Clearly, most R&D workers are knowledgeable and know what information they want.  However, the problem with free access is the volume of information one obtains when one starts looking for knowledge.

The problems of free access are fairly obvious: while workers may know how to use technology tools, they may not be skilled at searching for, using, or sharing the knowledge. One survey revealed that over a quarter of a typical knowledge worker’s time is spent searching for information. Another found that only 16 percent of the content within typical businesses is posted to locations where other workers can access it. Most knowledge workers haven’t been trained in search or knowledge management and have an incomplete understanding of how to use data sources and analytical tools.

The problem of searching for relevant information is exacerbated even more in the R&D environment.  An employee searching for thermal cracking problems in an engine block will find all documents that contain the words thermal and cracking.  Even when narrowed down, thermal cracking may relate to very different mechanisms.  The employee will likely give up the search and start working from scratch instead of digging through voluminous design documents.  This was a common problem I faced when trying to investigate failure modes in past systems to generate more robust designs.  A key answer would be to structure and filter the knowledge so that only the relevant information is displayed.
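To make the contrast concrete, here is a minimal sketch (with entirely hypothetical documents and metadata fields) of how prefiltering on structured metadata shrinks the result set that a plain keyword search would return:

```python
from dataclasses import dataclass

@dataclass
class DesignDocument:
    title: str
    text: str
    subsystem: str      # e.g. "engine_block", "exhaust_manifold"
    failure_mode: str   # e.g. "thermal_fatigue", "corrosion"

DOCS = [
    DesignDocument("Block casting review", "thermal cracking near cylinder 3",
                   "engine_block", "thermal_fatigue"),
    DesignDocument("Manifold weld audit", "thermal cracking at weld seam",
                   "exhaust_manifold", "thermal_fatigue"),
    DesignDocument("Coolant jacket study", "thermal gradients and cracking risk",
                   "engine_block", "corrosion"),
]

def free_search(docs, keywords):
    """Free access: every document containing all keywords comes back."""
    return [d for d in docs if all(k in d.text for k in keywords)]

def structured_search(docs, keywords, subsystem, failure_mode):
    """Structured access: prefilter on metadata, then keyword-match."""
    candidates = [d for d in docs
                  if d.subsystem == subsystem and d.failure_mode == failure_mode]
    return [d for d in candidates if all(k in d.text for k in keywords)]

print(len(free_search(DOCS, ["thermal", "cracking"])))            # 3 hits
print(len(structured_search(DOCS, ["thermal", "cracking"],
                            "engine_block", "thermal_fatigue")))  # 1 hit
```

With three documents the difference is trivial; with hundreds of thousands, the metadata filter is what keeps the engineer from giving up and starting from scratch.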

“Structured-provision technologies first appeared in the early 1990s and have improved considerably of late. They often have a range of functions. The most important is workflow technology that controls how knowledge workers get information and job tasks. These workers may encounter supporting technologies that include information portals, business rules or algorithms to automate decisions, document- or content-management systems, business process management-and-monitoring systems, and collaboration tools. Increasingly modular component designs make these technologies easier to deploy.”

Structured access has had some success in simple knowledge work like mortgage application processing or insurance claims processing:

Productivity is the major benefit: as measured by the completion of key tasks per unit of work time, it often rises by 50 percent when organizations implement these technologies. One automobile-leasing company, for example, achieved such gains after it implemented a new system for lease processing and end-of-lease sale offers. The reason for the improvement was that workers had few distractions and spent no time searching for information.

The key disadvantage of structured access is that, by definition, it reduces direct interaction between workers.  Furthermore, it requires a well-defined process that can be automated to structure the knowledge.

In structured information environments, computer systems rather than knowledge workers integrate the work, so extensive system and process design is required up front for implementation. While these systems can be tailored to fit complex business processes, that kind of tight fit can become a problem if business environments or processes change.

This is easy to do for simple tasks but very difficult for complex R&D.  Some work has been done on structuring interactions for systems engineering requirements management.  However, I am not aware of any tool that can structure access for the R&D environment (beyond those developed by my firm, InspiRD).  The article provides a useful framework to analyze what type of access would be beneficial in which environment.

R&D clearly falls in the top-right corner of this 2×2.  I would contend that even in that scenario, some amount of structure is absolutely critical to effectiveness.  In fact, we need a hybrid approach where the IT system filters and narrows the search results for the team member.  It then provides free access only to the relevant information so that the R&D team member can reuse past development.

Another way of smoothing the path to structure is letting knowledge workers use familiar, typically free-access tools when they interact with a structured system. To alert them when it’s time to use a structured application, for example, have it send them an e-mail. If a structured task requires, say, passing financial information to and from the system, let workers use a spreadsheet. Always remember: high-end knowledge workers don’t want to spend all their working hours interacting with automated tools.

Limiting choices in a hybrid approach, if implemented correctly, can actually enhance collaboration and interaction.  If team members are overwhelmed by the amount of information and start redeveloping technologies, free access will reduce interactions – not enhance collaboration. By limiting choices, we might be able to encourage R&D teams to engage in productive, purpose-driven communication and build networks.

We live in a world where knowledge-based work is expanding rapidly. So is the application of technology to almost every business process and job. But to date, high-end knowledge workers have largely remained free to use only the technology they personally find useful. It’s time to think about how to make them more productive by imposing a bit more structure. This combination of technology and structure, along with a bit of managerial discretion in applying them to knowledge work, may well produce a revolution in the jobs that cost and matter the most to contemporary organizations.


An example of a good R&D plan

As we have discussed in the past, R&D management is challenging because most new products require many technologies to mature simultaneously and many engineering disciplines to work together. The only real answer to effective R&D management is effective R&D plans.  R&D planning remains very hard, and we have been discussing some approaches to address it.

  1. Good R&D plans have multiple milestones with clearly defined objectives at System AND Technology level.  These milestones bring constituent technologies together to evaluate / guide integration.  
  2. Good plans drive reuse of development between various development projects to reduce development costs and improve efficiency.  
  3. Good plans have multiple points of insertion from technologies into delivered products – i.e. Different subsystems from different development projects mature at different times and get inserted into delivered products.  These multiple insertion paths reduce long-term risks and improve return on investment.
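As a rough illustration of these three attributes, a plan can be represented as a small data structure with milestones at both levels and multiple insertion points. The sketch below is loosely based on the Mitsubishi example discussed next, but the specific years, milestones and product names are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Milestone:
    name: str
    level: str       # "system" or "technology" (attribute 1)
    objective: str   # measurable objective for the milestone
    year: int

@dataclass
class InsertionPoint:
    technology: str      # a maturing subsystem (attribute 3)
    target_product: str  # the delivered product it feeds into
    year: int

@dataclass
class RDPlan:
    project: str
    milestones: list
    insertions: list

    def insertion_years(self):
        # Multiple insertion paths: technologies reach products at
        # different times, reducing long-term risk.
        return sorted({i.year for i in self.insertions})

plan = RDPlan(
    project="Integrated inverter/motor system",
    milestones=[
        Milestone("SiC power device demo", "technology",
                  "Halve inverter loss vs. Si baseline", 2012),
        Milestone("Integrated system demo", "system",
                  "70 kW output, ~10% mass reduction", 2012),
        Milestone("Commercial release", "system",
                  "3-5 point efficiency gain (JC08)", 2017),
    ],
    insertions=[
        InsertionPoint("SiC power device", "current-generation inverter", 2014),
        InsertionPoint("Integrated motor system", "production EV", 2017),
    ],
)
print(plan.insertion_years())  # [2014, 2017]
```

Even a representation this simple makes the gaps visible: a plan with only one insertion year, or with no technology-level milestones before the system demo, fails the checklist above.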

I have been looking for good examples of effective R&D plans.  The article Mitsubishi Integrates Inverter With EV Motor System from Tech On discusses the demonstration of a new product under development:

“Mitsubishi Electric Corp developed a motor system whose output power is more than 70kW for electric vehicles (EVs) by integrating an inverter and a motor on the same axis.”

This integration has many benefits including reduced volume, reduced weight and improved installation among others.

The integration enabled to shorten electric lines between the inverter and motor as well as to integrate pipes for water cooling that are required for each of the inverter and motor in the old system.
The mass of the new system is about 10% less than that of the old one. And the total efficiency of the new system is 3-5 points higher than that of the old system under the JC08 test mode.

This demonstrates one aspect of a good R&D plan: clearly delineated objectives and goals.  These goals should be measurable so that progress can be evaluated at multiple points along the development pipeline. The company plans to commercialize the system only in 2017.  However, they are demonstrating some of the capabilities of the integrated system in 2012!  It is important to address integration challenges early and not wait until technology development is complete.

It is also important to identify major development hurdles and clearly define targets for technology development.  In this case, the company has identified heat from the inverter as the key challenge and identified multiple technology development paths to address it.  This clarity drives innovation:

Because the motor and inverter generate a large amount of heat, the company not only increased cooling capability but also made improvements to each of the motor and inverter to reduce heat generation. Specifically, it changed the magnetic design of the motor and employed a silicon carbide (SiC)-based power device for the inverter. With the SiC-based power device, the loss of the inverter was reduced by half, compared with the inverter of the old system that uses a silicon (Si)-based power device.

Since development of power devices is expensive, the plan includes insertions of the SiC devices before the final system delivery.  Multiple insertion paths reduce the risk of wasted development effort:

The company aims to commercialize the system in 2017. And it plans to commercialize an EV motor system whose inverter using a Si-based power device and motor are separated in 2014.

Finally, there are incremental objectives for development at each stage, further enhancing management’s ability to monitor and guide R&D:

Currently, the motor system can be used for rotating tires and for simulated driving based on actual driving patterns in a laboratory. To commercialize the system, it is necessary to improve its structure for volume production, fine-tune it and further reduce its weight by 10 to 20%, Mitsubishi Electric said.


The Influence of Prior Industry Affiliation on Framing in Nascent Industries

A very useful paper from the HBS Working Knowledge about The Influence of Prior Industry Affiliation on Framing in Nascent Industries explores the digital camera market to identify some useful trends in firms entering new markets:

New industries sparked by technological change are characterized by high uncertainty. In this paper we explore how a firm’s conceptualization of products in this context, as reflected by product feature choices, is influenced by prior industry affiliation. We study digital cameras introduced from 1991 to 2006 by firms from three prior industries.

The paper hypothesizes that firms entering new industries tend to continue to behave like the industry from which they originate. This is a unique perspective, and one that can be useful for all of us to understand, because the corporate mindset is critical to how products get launched.

We hypothesize and find first, that prior industry experience shapes a set of shared beliefs resulting in similar and concurrent firm behavior; second, that firms notice and imitate the behaviors of firms from the same prior industry; and third, that as firms gain experience with particular features, the influence of prior industry decreases. This study extends previous research on firm entry into new domains by examining heterogeneity in firms’ framing and feature-level entry choices.

Let us dig in to see what we can learn…
R&D always has to address uncertainty when developing new products. We have to experiment with product configurations, functions and technologies. However, new industries are even more challenging:

Potential customers have little or no experience with products, and their preferences are therefore unformed and unarticulated. Even basic assumptions about what the product is and how it should be used are subject to debate. Similarly, from a technological perspective, uncertainty exists about the rate of performance improvement of the new technology, how components of a technological system will interact, and whether different technological variants will work at all. Market and technological uncertainty are often compounded by competitive uncertainty as firms grapple with shifting industry boundaries and the convergence of firms from previously distinct domains.

The paper intends to analyze and explain how different firms decide to enter new markets and what drives them to be different from each other (heterogeneity). The digital camera industry studied by the authors is quite appropriate because it was at the confluence of multiple technologies / markets:

…the emergence of consumer digital cameras was characterized by high uncertainty and the entry of firms from three prior industries, photography, computing, and consumer electronics, enabling a comparison of the influence of firm background on decisions about which features a digital camera should include.

This is interesting.  Digital cameras needed expertise from many different segments: image sensors (semiconductor), optics, digital processing, displays, user experience (how a camera takes pictures, the forte of vendors such as Nikon), film (Kodak) and consumer electronics (including mobile phones).  Market entrants from each participating industry segment approached the market based on their predispositions:

We find that prior industry affiliation had a significant influence on a firm’s initial framing of the nascent product market. Qualitative data indicate that digital camera product concepts and expected uses varied systematically, ranging from an analog camera substitute (photography firms), to a video system component (consumer electronics firms), to a PC peripheral (computing firms) before converging on a product concept that included elements of all three frames.

Also, different entrants from the same industry focused on similar products (based on their prior beliefs). However, as participants gained more experience with a particular product, they moved away from behavior corresponding to their previous industry, following a three-stage model: an era of ferment, convergence on a dominant design, and an era of incremental change:

Our results suggest that firms from the same prior industry shared similar beliefs about what consumers would value as reflected in their concurrent introduction of features — firms were significantly more likely to introduce a feature, such as optical zoom, to the extent that other firms from the same prior industry entered with the feature in the same year, whereas concurrent entry by firms from different prior industries had no influence. Firms were also likely to imitate the behavior of firms from the same prior industry, as opposed to that of firms from different prior industries in introducing some, but not all features. Finally, we find that as a firm’s experience with a particular feature increased, the influence of prior industry decreased.

The paper suggests that industry-level (or at least multi-participant) beliefs are important because they tend to shape the industry and the competitive landscape. Sometimes the inability to develop all product features allows new entrants into the market. For example, few firms were able to integrate digital cameras with GPS location and provide new user experiences.  It took Apple to combine a touch screen display with a media player in a mobile phone. In new industries, R&D managers lack a detailed understanding of customer preferences (they have not evolved yet), and hence prior experience becomes even more important. Maybe we should focus on thematic similarities a bit more to address this competitive weakness in traditional R&D management models. An approach focused on how customers would use the product and its features would help the exploration of thematic similarity (maybe we can learn from Steve Jobs).


CEO says Ford won’t back off R&D spending

I have been gathering data about corporate response to difficult market conditions, especially the impact on R&D spending.  Tough times affect every aspect of an organization’s operations, and they have changed R&D spending as well (reducing focus on long-term R&D).  Even so, organizations tend to fight to maintain R&D spending levels.  We have seen that CEOs of companies such as 3M have maintained R&D spending despite the downturn. Here is another data point from the MarketWatch post CEO says Ford won’t back off R&D spending:

Ford Motor Co. CEO Alan Mulally said Tuesday at the Geneva Motor Show that the auto maker will focus not on forging further alliances in Europe to help drive growth but on continuing to invest heavily in new products. “We have never backed off, even through this entire recession,” he said. “We actually have increased investment in our new vehicles during the toughest of times.

As a background, the European slowdown is likely to lead to a $0.6B loss in Ford’s European operations (Ford launches B-Max subcompact – seattlepi.com):

Ford will focus on cost containment to return to profitability until demand is restored, but he declined to speculate on possible measures. Booth said Ford Europe could lose $500 to $600 million dollars this year, after recording losses of $190 million in the last quarter of 2011.

Interestingly, the cost cuts are going to be in manufacturing operations rather than R&D, even though R&D probably has more flexibility.  Even more importantly, we have discussed many times that how you spend on R&D is far more important than how much.  In fact, leaders such as the CTOs of Texas Instruments and Pfizer have found that R&D cost cuts actually improved results!
The effort to maintain budgets is even more surprising in light of the fact that surveys show most R&D executives do not see R&D as a driver of innovation.  Maybe some of these CxO statements are made from a public relations perspective, but they are still important to understand.

The second important point Mr. Mulally makes is that Ford will not form R&D alliances.  Sharing R&D across multiple companies is a simple way to reduce R&D costs in the near term.  Here is another article from MarketWatch discussing R&D alliances (BMW, GM still talking over technology cooperation):

BMW AG Chief Executive Norbert Reithofer confirmed Tuesday that the German car maker’s cooperation projects with PSA Peugeot Citroen remain unaffected after the French peer last week forged an alliance with General Motors Co.. He added that cooperation talks between BMW and GM over “future technologies” such as fuel cells are still ongoing, but declined to elaborate.

As we have discussed in the past, automotive companies maintain a complex web of alliances.  Maybe the Ford approach has some value considering the difficulty and cost of managing these alliances and maintaining IP rights across them.


How can R&D Management help exploit Thematic Similarity?

The article In Praise of Dissimilarity from MIT Sloan Management Review has very important implications for R&D management.  The article describes how most managers view similarity based on functionality or product taxonomy (e.g. solid-state drives and hard drives are similar).  However, another way to look for similarity is based on how different products interact in a scenario or event (e.g. shoes and MP3 players are related through exercise).  This is called thematic similarity.  The article points out that thematic similarity can help focus innovation and provide a competitive advantage.  However, it also raises some important challenges for R&D management.  Let’s dig in.
Traditionally, similarity (taxonomic similarity) has been seen as that based on overlap of functions and features:

Whether explicitly or implicitly, the traditional understanding of “similarity” by managers has been a taxonomic one. Simply put, the degree of similarity as traditionally measured depends on the extent to which two objects possess the same features. Personal computers, for instance, all have hard drives, processors and a video monitor.

Thus, taxonomic similarity is based on the properties of the objects themselves, and taxonomic categories cohere around shared internal properties. As a consequence, taxonomically related concepts tend to resemble one another.

Thematic similarity is probably as important but often overlooked:

…similarity is not just a matter of degree (how similar are two things), but also of kind (how are two things similar). Two things are thematically similar if they functionally interact in the same scenario or event. For example, an athletic shoe and an MP3 player are related through interacting in a workout theme, coffee and a computer interact in an office theme and a navigation system and a motor via an automobile theme. In each of these cases, the two things perform different roles.
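The distinction can be made concrete by comparing overlap of product features (taxonomic) with overlap of usage scenarios (thematic). A minimal sketch follows; the feature and scenario sets are illustrative values I have made up around the article's athletic-shoe and MP3-player example:

```python
def jaccard(a: set, b: set) -> float:
    """Set overlap: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Illustrative (made-up) feature and usage-scenario sets
features = {
    "athletic shoe": {"sole", "laces", "cushioning"},
    "mp3 player":    {"battery", "display", "audio codec"},
}
scenarios = {
    "athletic shoe": {"workout", "commuting"},
    "mp3 player":    {"workout", "commuting", "office"},
}

taxonomic = jaccard(features["athletic shoe"], features["mp3 player"])
thematic = jaccard(scenarios["athletic shoe"], scenarios["mp3 player"])

# Taxonomically the two products share nothing; thematically they overlap heavily
assert taxonomic == 0.0 and thematic > 0.5
```

A manager scanning only the feature-based score would see no relationship at all, which is exactly the blind spot the article describes.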

However, managers are trained to focus on taxonomic similarity and hence are prone to ignore thematic similarity:

When managers ignore the thematic similarity hidden behind taxonomic dissimilarity, they risk overlooking opportunity (as well as misdiagnosing threat).

The behavioral theory of the business enterprise has long acknowledged managers’ dangerous tendency to search for opportunity in familiar taxonomic domains.

The benefits of thinking thematically are pretty significant:

Thematic similarity opens up a new area of the dissimilarity space. While Google Maps and Yellow Pages are taxonomically similar services, another Google service, Google Voice Search, and GPS are clearly in taxonomically dissimilar categories. And yet there is a thematic similarity between the two in the context of using cell phones.

Hence themes can actually help focus and direct long-term R&D and innovation:

The new area of thematic similarity holds particular promise for innovation and opportunity search. Focusing on areas of taxonomic dissimilarity can help managers identify novel products or services that result from the combination of strategic assets that are taxonomically dissimilar but thematically related.

As we have discussed many times, innovation occurs at the intersection of technologies.  The more dissimilar the underlying technologies, the more disruptive the innovation is likely to be.  Thematic similarity provides a framework to bring normally dissimilar technologies together, and hence drive innovation:

This distant (in taxonomic terms) yet close search for opportunities created by thematic similarity provides a pragmatic guide to how (in which domains) strategists can find new potential for competitive advantage.

The underlying problem is that R&D management processes and cultures are developed around taxonomic similarity:

Taxonomic similarity underlies key frameworks of management such as strategic relatedness, the Standard Industry Classification (SIC) system, the definition of industry boundaries, including the forces within that industry, and the International Patent Classification (IPC). For example, the IPC category F02 (combustion engines) contains internal-combustion piston engines, gas-turbine plants, jet-propulsion plants and so on.

Maybe we can extend some of the traditional tools, such as brainstorming, and focus them around themes:

Methods such as brainstorming, which aim at identifying such distant domains, are often referred to in the general management literature. For example, in an attempt to move beyond mere product extension, companies often encourage their developers to think “outside the box”

But true exploitation of thematic similarity will require management innovation. We will need to develop new tools and processes to decide which thematic similarities to explore and how much to invest in them.  One example provided by the article focuses on the integration of GPS technology with cameras. Thinking thematically, this would be a pretty straightforward marriage.  However, in reality, it is very hard to do.  The skills necessary to design cameras are very different from those required to design GPS receivers.  Even if we can get the two sets of technologists to brainstorm together, actual collaboration through workshops would be rather difficult.  For managers, resource allocation for such development would be even more difficult.  One approach would be to have detailed roadmaps that can be used to engender purpose-driven communication between the two groups, and portfolio balancing processes that effectively allocate resources for such activity.

…consider an extreme case in which two products are so strongly associated that they are combined in one product but not thematically integrated. Many cell phones sport a camera function and a GPS function. However, the GPS and camera functions have not been integrated in most phones, despite sharing a thematic similarity: Many photos are about places, just as GPS is about places. Thematic integration links these two functions, allowing users to “geotag” the location at which a photo is taken.

Another advantage of exploring thematic similarity is the ability to identify all potential competitors. For example, as the article points out, Google did not see its business model as amenable to or at risk from social networking:

Google only openly acknowledged the threat posed by Facebook on November 1, 2007, when it launched Open Social, Google’s own social networking platform. In other words, Facebook remained a noncompetitor for Google for more than three years and six months after Facebook’s launch. In fact, Google managers actively dismissed Facebook precisely because it did not fit Google’s taxonomy of activities. Google CEO Eric Schmidt said, “We have address books, and the sum of our address books is the social graph.” And it was not until February 9, 2010, that Google acknowledged the thematic similarity between social networks and e-mail by making a determined foray into exploiting the integration of social networking and e-mail by launching Buzz, a networking service that was closely integrated with its e-mail offering, Gmail.

We will also need new strategic planning processes that can identify competitive threats from thematically similar firms.  More importantly, we will need a better approach to evaluate those threats and find effective ways to respond to them.  Finally, thematic similarity can be used to find acquisition targets.  The article points out that Intel believes it acquired McAfee based on thematic similarities.  The problem is that McAfee’s business model is so different from Intel’s that integration of the two will take a very long time.

Actually, Intel and McAfee are remarkably similar thematically. According to Intel, the acquisition of McAfee would boost its strategy in mobile wireless, where it is beginning to produce chips for smart phones. Beyond smart phones, security is becoming a key requirement as new devices, from tablet computers and handsets to televisions and refrigerators, connect to the Internet. The purchase is therefore set to turn Intel, the world’s largest chip-maker, into a leader in security, extending its reach into Internet-connected devices.

While experts hope that chips can be improved to make them able to withstand malicious attacks, that prospect is seen as being years away.

Even with time, I am not sure how easy or valuable this integration will be.  Maybe there is a limit to how taxonomically dissimilar firms can be before they can no longer be merged effectively.  Furthermore, if integration is going to take many years, can we actually forecast how the marketplace will function at that time?
A few more questions than answers, but still a very useful concept.


Apple’s R&D portfolio strategy – “Get Rid of the Crappy Stuff” (Continued)

I had been meaning to write about the article For the good of the company? Five Apple products Steve Jobs killed from Ars Technica:

When Steven P. Jobs returned to Apple in 1997, he returned to a slew of ill-conceived product lines. Some were excessive, and some were downright silly, but many were ultimately killed off for their poor alignment with consumer needs and wants. Still, even with Jobs’ discerning eye, he wasn’t immune to having to deal with a few bad product decisions.

We discussed Jobs’ portfolio management methodology here. I had mentioned that it is hard to make the right decision about what is crap.  This prevents some leaders from making any decision at all.  The idea should be to find failures early, before a significant investment has been made.  In fact, we should encourage some amount of risk taking in R&D organizations to ensure that we are pushing the boundaries.  The only way to ensure sufficient risks are taken is to see some projects fail, and to reward failure.  Even Steve Jobs occasionally made bad product decisions.  The only answer is to have a good risk management process in place to catch failures. We also want to make sure we learn something from each failure so we can improve decision making for the future. So, here is an example of a bad product decision by Jobs:

The Power Mac G4 Cube, a computer suspended in a clear plastic box, was designed by Jonathan Ive and released in July 2000. The Cube sported a 450MHz G4 processor, 20GB hard drive, and 64MB of RAM for $1,799, but no PCI slots or conventional audio outputs or inputs, favoring instead a USB amplifier and a set of Harman Kardon speakers. The machine was known in certain circles as Jobs’ baby.

While Apple hoped the computer would be a smash hit, few customers could see their way to buying the monitor-less Cube when the all-in-one iMac could be purchased for less, and a full-sized PowerMac G4 introduced a month later with the same specs could be had for $1,599. Apple attempted to re-price and re-spec the Cube in the following months, but Jobs ended up murdering one of his own darlings, suspending production of the model exactly one year after its release. While the Cube’s design is still revered (it’s part of the MoMA’s collection), it proved consumers won’t buy a product for its design alone.


Roadmaps as a foundation for effective R&D management (Part 1)

I am writing a paper on the use of R&D plans as a foundation for effective R&D management.  As a part of the effort, I am collecting prior research on R&D planning and roadmapping.  I plan to summarize some of the interesting papers I find along the way.  The first is from a roadmap seminar given by two MIT professors at Harvard Business School in 2004.  It provides a good background on some work done on longer-term technology planning and touches upon near-term product planning.

Roadmaps provide a framework for thinking about the future. They create a structure for strategic planning and development, for exploring potential development paths, and for ensuring that future goals are met.

One reason for developing roadmaps is to address many sources of uncertainty in the face of complexity:

One must weigh many sources of uncertainty and try to comprehend how a large number of complex and dynamic factors might interrelate and influence development of a process or a technology. … Roadmapping is not the only tool for this type of strategic planning, but it is practical and straightforward in its approach and gaining increased attention and usage.

The article lays out two types of roadmaps: Exploratory and Target Driven.

Exploratory roadmaps are what are sometimes called Technology Push roadmaps, which envision emerging technologies.  These roadmaps are used to “push” technologies into products without there being a well-defined need for the technology’s benefits:

Exploratory Mapping is used as a framework to explore emerging technologies and to examine potentially disruptive technologies. The process creates a map of the technology landscape by surveying possible future scenarios. There is not necessarily consensus on the technology or its evolution at this stage.

It appears that some of the leading work on exploratory roadmaps was done at Motorola:

“Roadmaps provide an extended look at the future of a chosen field of inquiry drawn from the collective knowledge and imagination of the groups and individuals driving change in that field. Roadmaps include statements of theories and trends, the formulation of models, identification of linkages among and within the sciences, identification of discontinuities and knowledge voids, and interpretation of investigations and experiments.” – Robert Galvin

Roadmap implementation is hard, and data shows that less than 10% of R&D organizations use roadmaps.  In my experience, exploratory roadmaps are the most prevalent form of roadmaps implemented.  They are used more like a marketing document for technologists to secure continued funding than as a real planning document (more on this in a future post). The other form of roadmap is used to communicate products under development: Target-Driven Roadmaps:

Target-Driven Roadmapping is used to drive toward a specific technical target. The technology objective is clearly articulated and there is a level of consensus on what the targets should be. The roadmap serves to drive innovation and resources toward reaching that end goal.

These are sometimes called Technology Pull roadmaps, where different technologies are “pulled” forward to satisfy specific market needs.  Some work has also been done on Target-Driven roadmaps.

“Typically based on strategic plan requirements, roadmaps incorporate product attributes and layout goals, development requirements, allocation priorities, and defined evolution plans for flagship or core products and platforms.” – Strauss, Radnor & Peterson

Even so, the roadmaps are still used mainly for communication rather than as a foundation for R&D management:

“The output of the technology roadmapping process is typically a product-specific roadmap which, in simple visual representations of hardware, software and algorithm evolution, links customer-driven features and functions to specific clusters of technologies.” – Strauss, Radnor & Peterson

This is borne out by the article as well.  They suggest that

While the processes and outputs of these two types of roadmapping can vary significantly, there are common elements. Roadmapping requires:
– a social and collaborative process;
– an analytical method of assessing and planning future development;
– a means of communicating using visual or graphic representations of key targets or goals as a function of time.

Clearly, roadmaps do provide a structured foundation for R&D collaboration.  Although the second bullet mentions an analytical method for assessing R&D, I have yet to come across an organization that uses roadmaps for that purpose.  In fact, very little of the article is dedicated to the second point.  The article focuses on the social and collaborative use of roadmaps and outlines a workshop-based process to develop them.  This seems to have become the primary form of roadmapping.  In many organizations I have visited, roadmapping has a tendency to become a bureaucratic checkbox and is hardly ever used to drive innovation.  In fact, most of the benefits of a true roadmapping process outlined in the article (and described below) are hardly ever achieved.

1. Establish a vision for the future.

Roadmaps can definitely communicate a vision, and this is one of their great benefits.

2. Encourage systems-thinking. A comprehensive roadmapping framework forces the roadmap participants to think about technology development within the context of a larger system and aids better understanding of the linkages among technology, policy, and industry dynamics.

This is where structured target-driven roadmapping becomes important.  In most physical systems, this is hard to do in a workshop or social environment.  Product development plans are complex and require the knowledge of tens (if not hundreds) of engineers.  Organizations need better roadmapping processes that place technology roadmaps in a system context.

3. Planning and coordination tool. Roadmaps align technologies and products with market demand by representing the co-evolution of technology and markets. Roadmaps can help in uncovering common technology needs within an organization, enabling the sharing and consolidation of R&D, supply-line and other common resources

This is probably the most important benefit of roadmaps.  However, as President Eisenhower said, “Plans are worthless, planning is everything.”  Most roadmaps are static, kept in PowerPoint documents and revisited once a year (at best).  Hardly an effective foundation for planning and coordination.

4. Accelerate innovation. Roadmapping provides a better understanding of the potential paths for innovation, helping to visualize new opportunities for future generations of product developments. 

This is a critical and often overlooked benefit of roadmaps.  Innovation happens at the intersection of technologies (not just one technology).  So an iPhone requires a capacitive touch screen, low-power electronics and a user interface (among others) to come together for innovation to be delivered to market.  Nokia, for example, had a touch screen phone years before the iPhone, but could not bring it to market.  Not only do the technologies need to mature simultaneously, all the related engineers need to know what others are capable of doing with them.  Roadmaps can allow all team members to understand the projected state of other technologies and hence drive innovation.  Since the number of technologies involved in modern systems is quite large, the workshop-based roadmapping process described in the paper is probably not sufficient to drive innovation.
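The coordination problem described above, where several technologies must all mature before a product can ship, can be sketched as a simple roadmap feasibility check. The technology names echo the iPhone example; the dates and function names are my own illustrative assumptions:

```python
# Hypothetical technology-readiness roadmap: technology -> year it matures
roadmap = {
    "capacitive touch screen": 2006,
    "low-power electronics": 2006,
    "touch user interface": 2007,
}

def earliest_launch(roadmap: dict, required: list) -> int:
    """A product can launch only once ALL its constituent technologies mature."""
    return max(roadmap[tech] for tech in required)

launch = earliest_launch(roadmap, ["capacitive touch screen",
                                   "low-power electronics",
                                   "touch user interface"])
assert launch == 2007  # gated by the slowest-maturing technology
```

Even this trivial model shows why a shared, current roadmap matters: each team's date constrains every product that depends on it, and no single workshop snapshot can keep those dependencies visible as they shift.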

5. Communications. Within corporations, roadmaps can provide a crucial link between management teams, marketing, engineering and R&D – improving communications and providing a clear sense of near term and long term targets. 

Pretty self-explanatory, and somewhat related to point 1.

My thesis remains that R&D plans can actually become a foundation for effective R&D management and can do much more than the five benefits outlined above.  Plans can help optimize resource allocation.  R&D plans can be used to measure and guide R&D operations.  They can also be used to forecast skill-set needs.  However, that will require plans that are a bit more controlled than those developed primarily for communication. More on this soon…