Why Crowd Science Makes Sense: Part II
In my last piece on why crowd science makes sense for innovation-focused businesses, I made the case that leveraging data analysis and algorithms is the key differentiator between companies that use crowdsourcing and companies that properly practice crowd science.
It's important to note the difference between the two because, while crowdsourcing as a practice is often effective for mass ideation, it only becomes strategic and predictable when it's backed by a strong, systematic approach.
Rankings and Behaviors
Returning to the concept of the Idea Threshold — “the point (or number of criteria met) that means an innovation has potential and should be supported” — it’s necessary to detail the connection between an idea’s value and the predictability of the ideator’s behavior.
Experiments by Mindjet’s data scientists show a direct link between crowd behavior, the predictability of successful innovations, and how “good” ideas are defined. When using an innovation management system, organizations rely on graduated ideation, which is driven by signals like votes and interactions. For ideas to rise to the top, they have to pass what is, in essence, a series of tests that prove the idea’s sustainability and resiliency. In a nutshell, as the criteria for what makes a good idea grow stricter at each stage, the overall number of good ideas drops, and so does the number of innovators who come up with them.
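To make the mechanics concrete, here's a minimal Python sketch of graduated ideation. Everything in it is illustrative: the vote thresholds, voter count, and uniform quality scores are my own assumptions, not Mindjet's actual criteria — the point is only that stacking stricter stages shrinks the pool of graduating ideas.

```python
import random

random.seed(0)

# Illustrative model, not Mindjet's actual one: each idea carries a latent
# quality in [0, 1]; at every graduation stage it must attract enough votes,
# and the required vote count rises stage by stage.
def survives(quality, thresholds, voters=100):
    """An idea graduates only if it clears the vote threshold at every stage."""
    for threshold in thresholds:
        # Each voter independently backs the idea with probability = quality.
        votes = sum(random.random() < quality for _ in range(voters))
        if votes < threshold:
            return False
    return True

ideas = [random.random() for _ in range(1000)]
survivors_by_stage = []
for thresholds in ([40], [40, 55], [40, 55, 70]):
    count = sum(survives(q, thresholds) for q in ideas)
    survivors_by_stage.append(count)
    print(f"{len(thresholds)} stage(s): {count} ideas graduate")
```

Running it shows the funnel narrowing: each added, stricter stage cuts the number of graduating ideas further, which is the pattern the research describes.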
Without getting too technical, this research shows that top innovators tend to cluster around great ideas. That knowledge enables the kind of tracking and predictability that helps decision-makers shape not only how they use innovations born from the crowd, but how they execute crowdsourcing itself.
3 Things to Remember
As we delve deeper into the realms of crowd science and how we can apply it to enterprise innovation, it’s crucial to remember that, like most activities with such an extensive number of variables, nothing is 100% exact. Here are three critical considerations:
1. Engagement matters more than you think. Mindjet’s research makes clear that engagement plays an outsized role in the effectiveness of crowdsourcing. The behavior of innovators (those who generate great ideas) and discerners (those who regularly interact with ideas that go on to do well) indicates that non-experts often drop out of the process before completion.
2. Numbers speak volumes. As ideas face stricter graduation criteria, the fraction of users who qualify as discerners decreases. The decline is gradual, though, so consider what late-stage actions can be taken to maintain involvement.
3. Be patient. Research suggests that, on average, under 2% of the crowd actually qualify as innovators, yet almost 70% of users are discerners. Additionally, about 6.4% act as both, and 22.43% are neither. That 6.4% represents the crowd’s sweet spot: high-value users who both generate high-quality ideas and discern idea quality. Identifying these individuals should be a top priority.
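The four segments above boil down to two yes/no questions per user: do they generate great ideas, and do they reliably back ideas that do well? Here's a tiny illustrative sketch of that segmentation; the `good_ideas` and `hit_rate` fields and their cutoffs are hypothetical stand-ins, not Mindjet's actual metrics — only the segment definitions come from the text.

```python
# Hypothetical segmentation of a crowd into the four groups described above.
# The fields and cutoffs are invented for illustration.
def segment(user):
    innovator = user["good_ideas"] >= 1   # has generated at least one great idea
    discerner = user["hit_rate"] >= 0.5   # majority of backed ideas did well
    if innovator and discerner:
        return "sweet spot"
    if innovator:
        return "innovator"
    if discerner:
        return "discerner"
    return "neither"

crowd = [
    {"good_ideas": 2, "hit_rate": 0.8},   # both -> the ~6.4% sweet spot
    {"good_ideas": 0, "hit_rate": 0.6},   # discerner only
    {"good_ideas": 1, "hit_rate": 0.1},   # innovator only
    {"good_ideas": 0, "hit_rate": 0.2},   # neither
]
print([segment(u) for u in crowd])
# → ['sweet spot', 'discerner', 'innovator', 'neither']
```

In a real platform, the two flags would come from the graduation and voting history the system already tracks, which is what makes surfacing the sweet-spot users tractable.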
At the end of the day, the value of innovation management and crowd science is that it allows you to take innovation efforts from lucky to logical.