Great data for Entrepreneurs

The article Perspective: Economic Conditions, Entrepreneurship, First-Product Development, and New Venture Success in the Journal of Product Innovation Management studies 539 new ventures started between 1998 and 2001 and reports the following findings:

  1. Consistent with prior research, fewer than half of the 539 ventures survived more than two years. 
  2. Economic downturns lead to higher failure rates for new ventures. 
  3. New venture success is highly correlated with first-product success. 
  4. First-product success is enhanced when those products are introduced into markets with emerging market needs but with established industry standards. 
  5. First-product and venture performance are significantly higher for products based on ideas that came from the founders. 
  6. Most successful first products are based on ideas that reflect both technology development and an analysis of customer needs.
Takeaway for me: Starting a business is risky.  However, as long as it is based on my own ideas and I pay close attention to customer needs, I should be OK!

R&D Collaborations and Product Innovation

The paper R&D Collaborations and Product Innovation in Journal of Product Innovation Management confirms some of the findings we discussed earlier in the week: It is good to collaborate with suppliers and not so good to develop products with customers. Specifically, this particular paper is based on R&D collaborations undertaken by a sample of 781 manufacturing firms during 1998–2002.  The paper finds that:

  1. Collaborations with suppliers have the highest positive impact on product innovation, followed by collaborations with universities. 
  2. R&D collaborations with customers do not appear to affect product innovation. 
  3. Collaborations with competitors appear to harm product innovation. 
  4. The positive influence of R&D collaborations with universities and suppliers is sustained over the long term. 
  5. The negative influence of R&D collaborations with competitors is, fortunately, short-lived. 
Also, some specifics about quality of collaboration: 

Their findings indicate that ease of knowledge access, rather than breadth of knowledge, appears to drive the success of R&D collaborations for product innovation. R&D collaborations with suppliers or universities, which are characterized by relatively easy knowledge access, have a positive influence on product innovation, whereas R&D collaborations with customers or competitors, which are characterized by reduced ease in knowledge access, are not related or are even negatively related to product innovation.

More importantly, a partner with a narrow knowledge base (at least in the part that is shared) is better for collaboration than one with a broad base.  This is similar to what we discussed in the paper on cross-functional collaborations.

Moreover, to achieve product innovation with the help of R&D collaborations, it appears that the collaboration must first have mechanisms in place to facilitate the transfer of knowledge; once these are in place, it is better if the partner has a relatively narrow knowledge base. Thus, while R&D collaborations with both suppliers and universities are positively related to product innovation, the narrow knowledge base provided by collaborations with suppliers appears to have a larger positive impact on product innovation than the wider knowledge base provided by collaborations with universities.


Performance measurement in R&D

Here is a quick reference from the journal R&D Management: Performance measurement in R&D: exploring the interplay between measurement objectives, dimensions of performance and contextual factors.  The overall learning is that industry and size are big influencers on which metrics firms use.  Furthermore, the overall goal for performance measurement also guides which metrics are used.

The results indicate that firms measure R&D performance with different purposes, i.e. motivate researchers and engineers, monitor the progress of activities, evaluate the profitability of R&D projects, favour coordination and communication and stimulate organisational learning. These objectives are pursued in clusters, and the importance firms attach to each cluster is influenced by the context (type of R&D, industry belonging, size) in which measurement takes place. Furthermore, a firm’s choice to measure R&D performance along a particular perspective (i.e. financial, customer, business processes or innovation and learning) is influenced by the classes of objectives (diagnostic, motivational or interactive) that are given higher priority.


High-Performance Product Management

The article High-Performance Product Management: The Impact of Structure, Process, Competencies, and Role Definition in the Journal of Product Innovation Management is interesting for many reasons.  The least of these is that it provides a very good history of research in product management.  More importantly, it combines qualitative interviews with factor analysis and maximum likelihood estimation to develop and test a model for improving performance in product management.

The paper identifies several key factors that potentially impact product management performance. A set of qualitative interviews is conducted to develop hypotheses related to constructs that may drive product management performance. These hypotheses are used to develop a causal model for product management performance that includes constructs related to roles and responsibilities, organization structure, and marketing processes related to product management. An empirical survey of 198 product managers from a variety of industries is conducted to test the causal model. The results of the causal model suggest that performance of a product management organization is driven by structural barriers in the organization, the quality of marketing processes, roles and responsibilities, and knowledge and competencies. The findings suggest that structural boundaries and interfaces are the biggest impediment to effective product management, followed by clarity of roles and responsibilities. The research highlights the importance of organization structure and effective human resource practices in improving product management performance.

Below is what I learned from it:


The overall model proposed for Product Management Excellence is:

Overall recommendations from the model and the associated analysis are:

  1. Remove Organizational Barriers and Connect Silos: For example, between existing-product management and new-product management residing in different organizations.
  2. Do not expect product managers to learn on the job: Develop and provide formal training to augment knowledge / competencies / soft skills.
  3. Define Authority and Responsibilities Clearly: In many organizations, product managers have no clear authority and vaguely defined roles. This makes it difficult to deliver results.
  4. Allow Product Managers to focus on Strategy & Planning: 42% of product managers surveyed believed that they spent most of their time on tactical activities and coordination. They had no time to make strategic decisions.
  5. Institute Quality Product Management Processes: From Market Requirement Documents to SKU Planning, a lack of effective processes increases non-value-added work and reduces effectiveness.

How to get employee engagement in R&D strategy

Time and again, I have found that most employees do not understand or even know about the company strategy.  The Corporate Executive Board has some good data in Get Your Frontline Onboard: Communicate, Clarify and Cascade (CEB Views – Finance and Strategy):

A surprising number of employees don’t know what their company’s strategy is. A study by the International Association of Business Communicators found that only one in three companies say their employees understand and live the strategy. Robert Kaplan and David Norton, the founders of the Balanced Scorecard, found the situation to be worse. They found only 5% of employees understand company strategy. Without understanding, execution is impossible. Therefore, communication is critical, not only to promote understanding but to help employees appreciate how the strategy relates to what they do.

Of the three Cs, communicate and clarify have been discussed pretty thoroughly.  I want to reemphasize cascade.  That is the crucial, and pretty much the most difficult, portion of getting employee engagement.  Most front-line employees do not or cannot figure out what they can do to help with company strategy.  Management needs to make the effort to actually help employees understand what behavior is expected of them.  Sending employees the strategy and asking them to follow it is not enough.

A funny anecdote: The president of a company actually tried to enforce strategy buy-in.  The strategy was very generic (Reduce Costs and Increase Revenues).  At first, he communicated and celebrated the strategy and assumed everyone would follow it.  That did not happen.  Then the president set up mandatory meetings between mid-level managers and senior executives to get buy-in.  That did not work either, because the mid-level managers did not know what they were supposed to do (nor did the senior executives).  These mandatory meetings became question-and-answer sessions that generated no results.  The president then decreed that the mandatory meetings would only be about how mid-level managers would implement the strategy; no questions would be answered.  That failed as well.  The strategy and the entire effort were then dropped.

I will leave you with the recommendations from CEB on cascading:

To help employees take ownership over their role in the execution, communications about strategy should always be accompanied by goals and metrics. These should be goals and objectives that employees can relate to and can be integrated with their daily tasks. Also, be sure to give them visibility into the goals that everyone up the line is trying to achieve as well so they understand how what they are doing contributes to the larger objectives. Ultimately, front line employees need to know:
What I need to do – goals and tasks
Why I need to do this – the value it provides the customer, the employee, the department and the organization
Don’t create too many goals. Prioritize to make it more manageable. If employees are overwhelmed by the scope of the strategy, or the number of goals they need to achieve, they are less likely to perform well.


Social Networking Helps R&D Collaboration?

The article Frontiers of Collaboration: The Evolution of Social Networking in Knowledge@Wharton discusses how social networking (wikis, blogs, tweets, etc.) is replacing knowledge management and improving communication at the same time.

Weinberger began the session by asking panelists what made the introduction of social networking tools different from previous technological endeavors to improve communication and collaboration. One significant issue discussed was how social networking compared with knowledge management (KM). KM systems first appeared on the scene about 20 years ago and once represented the frontier, embodying companies’ most innovative ideas for integrating internal access to disparate information in order to improve communication, collaboration and business processes.

Before social networking tools enabled quick and casual communication, many bloggers in corporate organizations had “some KM tool where you captured the knowledge in the tool’s silo and assigned all sorts of tags, folders and so on to it. You would then pass the blog to your manager for him or her to [learn from] what you were writing.”

Social networking is easing some of the frustration users in many organizations have encountered with traditional KM systems. Through use of Twitter and other tools, more of the intellectual capital that KM systems once guarded is flowing freely, in real time, inside and outside organizations. If an employee needs to find expertise or share information, he or she doesn’t have to work within the rigid confines of a KM system, or even the confines of his or her organization. Instead, the employee can use social media to collaborate with others and to find answers more quickly and put relevant advice into practice.

Clearly, social networking can add value to the R&D community.  However, I am not sure I would agree with the authors / speakers as wholeheartedly about replacing KM.

One problem with social networking is the volume of information that can become available and the time / effort needed to find the right information.  Consider what would happen if all employees in a 10,000-person R&D house started tweeting what they were doing – the signal-to-noise ratio would be terrible.  Add to this personal tweets, and the entire system would become unmanageable.  Furthermore, how does one control the flow of information and ensure that proprietary information is not accidentally leaked?  The panelists thought that the benefits outweigh the risks – I am not sure I agree:

Fitton, whose consulting firm focuses on helping companies to use micro-blogging in a business environment, suggested that companies may find the “messy and random serendipity” of Twitter and other social networks to be more efficient than lumbering KM systems and processes. “It brings an infusion of humanity to business,”

There might be value to social networking in building R&D communities and helping virtual teams collaborate effectively, but the idea that real R&D knowledge can be shared effectively through micro-blogging sounds a bit simplistic.  R&D teams do not just need access to knowledge; they need access to the right type of knowledge at the right time.  If one is designing a new cell phone and has a question about what impact the human body will have on reception, it does not help to go search through blogs, nor does one have time to do a mass mailing / tweet to request help, weed out the responses and find the right person.

The underlying assumption for social networking in the R&D world is that someone has the right information available at their fingertips and is willing and able to stop what they are doing to provide that information.  How likely is that?  Not to mention the constraints that social networking tools like Twitter add to communication…  Unlike the speaker, I am not sure that constraint is actually valuable:

But does the 140-character limit for posts to Twitter enable engagement, or is it “a sign of triviality?” asked Weinberger. “Constraints breed invention,” replied Shellen. Douglas added that communities using Twitter, Google Wave and other tools are creating their own etiquette. Panelists agreed that both the creation of etiquette for particular conversations and the sheer ability to engage in several discussions at once would be difficult using blogs and older forms of web content sharing programs.

There are more problems with sharing:

Lippe noted that, in the legal field, “there’s already a structure of knowledge, and most knowledge repositories and structures of the collaborative web have existed for multiple generations. So, the question is, how do you tap into them?” One core structure is attorney-client privilege, which Lippe said “has long preceded the information confidentiality and security regime that we all have now. It creates the structure of what you can and cannot share.” In the legal universe, he added, the messy serendipity of “horizontal” social networking cannot solve the hardest problems.

Just to be clear, I do not think that social networking has no value.  If used properly, it can help companies build focused virtual R&D communities across geographical and cultural boundaries.  However, R&D managers will need to do a lot of work structuring and managing the flow of information so that it is value-added.  Furthermore, the objectives of social networking should be crafted very carefully and monitored consistently to ensure that it is indeed delivering results.


Three Lessons for Sustainable Scenario Planning

I have always been a fan of scenario planning.  It really does provide great insights into R&D strategy and allows organizations to develop robust R&D plans.  The article Six Lessons for Sustainable Scenario Planning talks about how interest in scenario planning is increasing because of the turbulent economy:

One business discipline that generated a huge amount of interest during the recession was scenario planning. We wrote about it for Bloomberg Businessweek and advised many companies on it. The Corporate Strategy Board (CSB) ran a series of meetings around the globe on scenario planning where clients exchanged ideas and talked about how to implement the most successful practices we saw in our client networks. These discussions led to the six lessons below.

I will make it a bit simpler than the six lessons in the article – three points to keep in mind while setting up or maintaining a scenario-based planning process.  I had trouble with scenario planning at one organization; these three are my lessons learned from that experience:

  1. Formalize Scenario Planning: Have clear ownership / accountability for scenario planning. Ensure that the responsibility / authority are clearly defined and delineated from other ongoing efforts.  Finally, define and expect clear deliverables and results from the scenario planning exercise.
  2. Develop and use Actionable and Plausible Scenarios: It is not easy to devise scenarios that engender useful discussion and lead to robust plans.  On the other hand, one of the biggest benefits of scenario planning is the discussion around the assumptions of different scenarios.  All scenarios are based on assumptions.  Organizations should make these assumptions known (explicitly) and allow some discussion on them.  However, the assumptions discussion should be managed effectively and stopped at some stage – otherwise the scenario-based planning discussion never actually happens.  The CIA actually publishes very good global geopolitical scenarios that can be used as a foundation.  However, the scenarios you use will depend on the level at which you are doing strategic planning…
  3. Integrate Scenarios into overall planning and risk management: Once the results of scenario analysis are known, use them to drive strategic planning and integrate them into the risk management process.  Nothing drives implementation as much as results…

Managing Project Execution Risks (wonkish)

The Project Management Journal has an article called Managing risk symptom: A method to identify major risks of serious problem projects in SI environment using cyclic causal model.  The article lays out an interesting framework for managing project execution risks in large system integration (SI) environments.  Some of the concepts are worth remembering.

Serious problem projects (SPPs) often occur, particularly in a system integration environment, and it is difficult to prevent them, since the relationships among phenomena that occur throughout the project life cycle are extremely complicated. Our goal is to make it easier to identify major risks by distinguishing phenomena that are sources of future SPPs from phenomena observed in actual field projects. By choosing several events whose causal relation is known to be cyclic, we constructed a causal model and clarified that it can contribute to the easier recognition of SPPs empirically, by analyzing actual SPP cases.

The overall message is to anticipate major problem spirals by analyzing events, understanding whether a problem is the root cause of a death spiral or a derivative of it, and then taking effective action not only to mitigate the problem (event) but also to counter the underlying death spiral.

The paper is a bit difficult to read – probably because I am not familiar with Japanese project management terminology and because of the Japanese-to-English translation.  However, here are the major takeaways:


Risk events have different consequences depending on the development phase.  The article divides the project into three phases: upper (proposal / award), middle (early development), and lower (detailed development and launch).

The article lays out a model with three types of consequences of risks based on the phase (Devil Spiral and Death Spiral):
When an event occurs or is anticipated, the article suggests mapping it to the model based on the phase of the project and then determining whether it is a derivative event – a result of the spiral (per the article, a case that is derived from a death spiral) – or an accelerating event – a root cause that accelerates the spiral.  Once done, the idea is to actively manage events, understand the nature of the spiral, and take countermeasures to prevent the spirals from accelerating.
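The event-mapping step described above can be sketched as a small lookup. This is only a toy illustration of the idea, not the paper's method: the event names, phases, and classifications below are hypothetical examples.

```python
# Toy sketch of mapping (event, phase) pairs to their role in a spiral.
# All entries are hypothetical illustrations, not taken from the paper.
RISK_MODEL = {
    # (event, phase) -> "accelerating" (root cause) or "derivative" (symptom)
    ("underestimated_scope", "upper"): "accelerating",
    ("spec_churn", "middle"): "accelerating",
    ("schedule_slip", "middle"): "derivative",
    ("overtime_spike", "lower"): "derivative",
}

def classify(event: str, phase: str) -> str:
    """Return the event's role in the spiral, or 'unclassified' if unknown."""
    return RISK_MODEL.get((event, phase), "unclassified")

# Accelerating events call for countermeasures against the spiral itself,
# not just mitigation of the visible symptom.
print(classify("spec_churn", "middle"))  # prints: accelerating
```

The point of the lookup is the distinction it forces: a derivative event only needs local mitigation, while an accelerating event signals that the underlying spiral must be countered.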
Finally, here is an example of the completed analysis:

Confirmation bias in R&D management

Here is a bit of a philosophical problem that I have been thinking about for quite some time.  In the scientific world, there are all kinds of checks on proposals / decisions / results before they are accepted.  In fact, skepticism is actually somewhat welcomed.  Why are R&D management decisions not subject to a similar level of scrutiny?  Time and again I have found that decisions of senior R&D executives are not challenged and debated.  If innovation can only happen when there is questioning of the status quo in R&D, why not the same for R&D management innovation?

The article Confirmation bias in science: how to avoid it summarizes the problem pretty effectively (albeit in the context of scientific research):

One of the most common arguments against a scientific finding is confirmation bias: the scientist or scientists only look for data that confirms a desired conclusion. Confirmation bias is remarkably common—it is used by psychics, mediums, mentalists, and homeopaths, just to name a few.

The article had three interesting examples of confirmation bias. The one that is most applicable to R&D management and organizational pride comes from early 20th-century France – where the need to maintain national pride and a belief that all was well led to an amazing acceptance of bad research / decisions:

… Prosper-René Blondlot announced the discovery of N-rays. He was immediately famous in France, and very shortly afterwards, researchers from around the world confirmed that they too had seen N-rays. N-rays were an ephemeral thing: observed only as a corona around an electric discharge from certain crystals. They were only observed by the human eye, making them difficult to quantify.

But not everyone was convinced. Many researchers outside of France were suspicious of the number of claims coming from French labs for the properties of N-rays. In the end, an American scientist Robert Wood visited the lab of Blondlot to see it for himself. During one of the experiments he surreptitiously removed the crystal that supposedly generated the N-rays, after which Blondlot failed to notice the absence of N-rays. The N-rays failed to vanish when their source was removed.

From my observation of many firms during my management consulting days, I find that confirmation bias is even stronger in R&D management.  In fact, many senior managers seem to surround themselves with people who do nothing but confirm their decisions.  Below are what I think are the root causes that encourage confirmation bias in R&D management and some thoughts on what could be done about them.  I welcome any comments and criticism.
First, the process of scientific critique takes a very long time. For example, from the same Ars Technica article, the evaluation of the research took roughly 24 times as long as the work itself:

… the total amount of time coding the model? Maybe 24 hours, total. OK, call it 36 hours with some debugging. Running the code to get results? Maybe a minute per parameter set, so let’s call it a month. So that’s 32 days from around 730 total. What was all the rest of that time devoted to? Trying to anticipate every possible objection to our approach. Checking if those objections were valid. Trying to find examples of physically realistic parameters to test our model with. Seeing if the code was actually modeling what we thought it was. Making sure that our assumptions were valid. In summary, we were trying to prove ourselves wrong.

This is not practical in the R&D management world.  Clearly, if it takes two years to decide on a course of action, no action can be taken.  This problem has traditionally meant that management decisions cannot actually be discussed or questioned.  However, I am not sure that is accurate (more about it below).

Furthermore, scientific research review is easier because the experts in an area naturally form communities along the lines of discipline.  It is always possible to find an expert with the right expertise if one searches long enough:

The question session was fast and lively. And, yes, after the session, a senior scientist approached me and told me in no uncertain terms why our idea would not work—that sound you heard was me falling down the hole in our model. He was, and still is, right.

R&D management, on the other hand, reaches across disciplines, and there are no experts who can question results.  More importantly, each discipline traditionally reports its needs, requirements and results in its own jargon.  The only person who is authorized to bridge across these jargons is the senior manager. This authority and visibility gives senior managers a unique vantage point and makes it difficult for anyone else to question their decisions.

Furthermore, scientific work / decisions can be replicated by others and the results tested / verified.  This is not true in the R&D management world.  Decisions have long-term consequences, and once made, there is hardly ever a way to test what would have happened if some other decision had been made (because the economic and competitive landscape changes fundamentally by the time the results of decisions are visible).  This makes it difficult for anyone to question and / or critique R&D management decisions.

Finally, the consequences of failed scientific work are somewhat limited – only the lives of the researchers are directly impacted.  The consequences of failed R&D management decisions are often much larger and can have a significant impact on thousands of lives.  This pressure, along with the lack of a sufficient ability to measure the effectiveness of decisions, encourages R&D managers to surround themselves with people who confirm their decisions…

So what can be done about confirmation bias?

  1. Encourage constructive criticism of R&D management decisions: Even if the time-frame for questioning is much shorter than for scientific work – an hour or a week – the fact that other viewpoints are on the table will have value in itself.  This is even more important in the new world, where decisions impact incredibly complex systems that no one person can understand.
  2. Implement processes, tools and systems to make the information necessary for R&D management decisions more broadly available: Even though the disciplines participating in R&D and R&D management all have their own jargons, they are still tied together by the common thread of achieving desired objectives.  It is important to leverage this common thread and set up tools that elucidate the information that will let everyone – not just the R&D manager – see the data required to make effective decisions.  This will have the added advantage of validating the data and making sure there are no errors.
  3. Quantify the gut feelings that lead to decisions: In the end, R&D management decisions are always based on intuition, since no one can actually foresee the future in which the results of those decisions will become available.  This has traditionally meant that these qualitative decisions are not quantified in any way.  Standardized checklists are an easy way to quantify what the gut feel has been.
  4. Document decisions: Once the decisions are quantified, it is easy to document them in the tools and systems we talked about in step 2. If the decisions are easily accessible, it makes it possible to learn from them and understand why things worked or did not work.  It also makes it possible to recover or redirect if things indeed do go wrong.
  5. Develop intermediate milestones, inchstones or check points: If the only way to check the results of the decision is at the end, there is no way to recover or redirect.  By putting in place intermediate check points, especially based on key assumptions identified (step 3) and documented (step 4), R&D managers can improve their chances of success.
  6. Develop dashboards to monitor results of decisions: Combine systems in Step 2 with check points in Step 5 to develop dashboards that quickly show if things are not working – giving advance warnings to prevent catastrophic failures…
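As a toy illustration of steps 3 and 6 – scoring a decision with a standardized checklist and surfacing it on a simple dashboard – here is a minimal sketch. All criteria, weights, and thresholds are hypothetical; a real checklist would be tailored to the organization:

```python
# Hypothetical checklist criteria and weights for quantifying a gut-feel
# decision (step 3). These are illustrative, not a recommended set.
CHECKLIST = {
    "market_need_validated":   0.30,
    "technology_feasible":     0.25,
    "team_capability":         0.25,
    "assumptions_documented":  0.20,
}

def checklist_score(answers: dict) -> float:
    """Weighted score in [0, 1] from yes/no answers to the checklist."""
    return sum(w for item, w in CHECKLIST.items() if answers.get(item))

def dashboard_status(score: float, failed_checkpoints: int) -> str:
    """Traffic-light status (step 6), combining the documented checklist
    score with the number of intermediate checkpoints (step 5) missed."""
    if score < 0.5 or failed_checkpoints >= 2:
        return "red"
    if failed_checkpoints == 1:
        return "yellow"
    return "green"

answers = {"market_need_validated": True, "technology_feasible": True,
           "team_capability": False, "assumptions_documented": True}
score = checklist_score(answers)                      # 0.75
print(dashboard_status(score, failed_checkpoints=0))  # prints: green
```

The value is less in the arithmetic than in the record it creates: once the gut feel is written down as explicit criteria and scores, it can be documented (step 4), revisited at checkpoints (step 5), and challenged by others (step 1).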

Again, I welcome any criticism (constructive or otherwise)!


How to keep your top talent

Here is what the pointy haired boss suggests:

Here are three ways to keep the top talent, from the Corporate Executive Board:

  1. Get to know the top talent
  2. Don’t mistake current level of performance for future potential
  3. Differentially reward top talent

Here is a bonus, also from CEB – things to keep in mind for motivating your teams:

His core aim is to clearly communicate a consistent vision and then drive accountability for executing it. He has done this by avoiding five dysfunctions on his staff, which aligns well with Lencioni’s work. Lencioni’s five dysfunctions are:

  1. Absence of trust
  2. Fear of conflict
  3. Lack of commitment
  4. Avoidance of accountability
  5. Inattention to results