Innovator’s guide to “purple cow” breeding, Part 2
(This is Part 2 of the series. See Part 1 by clicking here)
What sucks about the traditional innovation methods?
In the last post, I briefly touched on the traditional innovation methods without going into too many specifics. Let’s take a closer look at the most common methods and their shortcomings, and then we’ll see how the methodology I am talking about addresses them.
In my experience, the sources of innovative ideas generally fall into these categories:
- An insight by a particular person;
- “The voice of the customer” (surveys, sales team feedback);
- Focus group brainstorming sessions;
- External Vendors or analysts, trying to sell something.
Before I dig into these, I would like to point out that these are all valid innovation sources. Their only drawback is the variability, or unpredictability, of the positive outcome associated with them.
What makes these sources unreliable? Let’s explore it.
An Insight: When an Idea becomes a Person
This is the case when an influential person (often a respected Subject Matter Expert or a senior executive) gains an insight, either by means of past experience (gradual), or by exposure to some new information, when “it all suddenly clicks”. The person then becomes a “sponsor” or an “advocate” for the idea, and typically seeks to build a supporting case with information that confirms the idea’s merit.
In many cases, however, the sponsor may unwittingly fall prey to the brain’s natural tendency to provide validation to something that it has accepted as part of itself. The idea thus becomes the person, which can get in the way of its objective analysis. Questions like “what portion of our customers is this valid for?” and “what is the likelihood that customers will be happy to pay more for this?” will either be skirted around, or will be supported by carefully selected data, while ignoring anything that may point to the contrary. I’ve personally been guilty of falling into this trap. Lessons learned.
“The Voice of the Customer” (that’s telling you bollocks)
Everybody knows the anecdote: “if I had asked the customers what they wanted, they would have told me a faster horse”. It clearly illustrates the problem with relying on customers’ input, which is this: when having a discussion with a customer about their needs, they will inevitably frame the discussion around the way they go about getting their job done now.
For example, if a “job to be done” is “to provide company’s employees with a reliable e-mail and calendaring service” and they are using Exchange to do it today, whatever ideas they have around doing it better will be centred around running their Exchange more efficiently. Worse still, if you talk to five different customers, all using Exchange, the ideas they come up with may end up being either incompatible or conflicting with each other.
Focus Group Brainstorming: Garbage In, Garbage Out
These sessions are typically conducted around “customer’s needs” that were collected by listening to the voice of the customer via various channels. Probably no need to spend too much time on this – if we had bad data to start with (see above), what hope do we have of coming up with something useful?
I cometh to you bearing gifts: Vendors and Analysts
In my opinion, I covered this case in sufficient detail in my post “Mid-market ‘Innovator’s Dilemma’”. In essence, if you’re not sufficiently big to bend Vendors’ roadmaps to your will, it is quite probable that their wares will fit your company’s business like a gum boot – you can walk in it all right, but it is unlikely to help you put your best foot forward. And, um, they sell the very same thing to all your competitors out there, so there goes your “differentiation”.
The same goes for Analysts – their research is driven by their biggest accounts, and is available to anybody who cares to pay.
So, what can be done differently?
To summarise the analysis above, the biggest contributor to the variability of the outcome of the innovation process is the poor input data. If we can fix that, we can improve our chances for success.
This is exactly what the Outcome-Driven Innovation (ODI) process has been developed to address. It shifts the focus of analysis from tweaking a “solution” (which can be a product or a service in use today) to the job that the customer is trying to get done.
As I mentioned before, jobs themselves rarely change, but the means of getting them done (products and services) evolve all the time. For example, we have needed to store, organise, provide access to, and secure information for ages. We still do; just the means of doing it are different from what they were a few years ago, and they keep on changing.
What does the ODI process involve?
- To determine who the information will come from, a growth source is picked (see “Major growth sources” in Part 1);
- Then, a “job executor” is selected: a function inside the customer’s company for which the new value will be created. In our hypothetical example above, the “job executor” is the person responsible for providing their company’s employees with email and calendaring services, irrespective of where they sit on the org chart.
- In the next step, the “job to be done” of interest is mapped out, using the universal job map which consists of eight steps: define, locate, prepare, confirm, execute, monitor, modify, and conclude. For each step, customer’s desired outcomes in relation to speed, stability and output are determined.
- When all desired outcomes have been collected, a representative population of the prospective customer base is interviewed to assess (1) how important each of the desired outcomes is to the job-to-be-done; and (2) how well their existing solution helps them achieve each desired outcome. The percentage of the respondents who rated a particular desired outcome as “Important” or “Very Important” then becomes the overall Importance value for that desired outcome; likewise for Satisfaction.
- The output of the collection process is then run through an “opportunity score” formula, where Opportunity = Importance + max (Importance – Satisfaction, 0), producing one score per desired outcome.
- The results are then plotted on a Satisfaction/Importance graph, which highlights three general groups of discovered needs: Overserved, Adequately Served, and Underserved.
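The scoring and segmentation steps above can be sketched in a few lines of Python. Note that the outcome names, the survey numbers, the 0–10 rating scale, and the cut-off of 10 for “underserved” are illustrative assumptions on my part, not data from this post:

```python
# Sketch of the ODI opportunity-score calculation and need segmentation.
# Importance and Satisfaction are assumed to be on a 0-10 scale (e.g. the
# top-box survey percentages divided by ten). All figures below are made up.

def opportunity_score(importance: float, satisfaction: float) -> float:
    """Opportunity = Importance + max(Importance - Satisfaction, 0)."""
    return importance + max(importance - satisfaction, 0)

def classify(importance: float, satisfaction: float) -> str:
    """Rough segmentation on the Satisfaction/Importance plane."""
    if satisfaction > importance:
        return "Overserved"          # solution over-delivers on this outcome
    if opportunity_score(importance, satisfaction) > 10:  # assumed cut-off
        return "Underserved"         # important, yet poorly satisfied
    return "Adequately served"

# Hypothetical desired outcomes for the email/calendaring job:
# name -> (Importance, Satisfaction)
outcomes = {
    "minimise time to restore a mailbox": (9.2, 4.1),
    "minimise unplanned downtime": (8.7, 8.2),
    "minimise cost of storage": (5.0, 7.5),
}

for name, (imp, sat) in outcomes.items():
    print(f"{name}: opportunity={opportunity_score(imp, sat):.1f} "
          f"({classify(imp, sat)})")
```

The `max(..., 0)` term simply stops an over-satisfied outcome from dragging its score below its importance; over-delivery shows up in the classification instead, which is where the “overserved” opportunities (candidates for cost reduction or disruption) become visible.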
In the next instalment, we will have a look at how this data can be used to reliably create our “purple cow” ideas, prioritise product development pipeline, tune market messaging, and catch a forming disruption.