In the late 1990s knowledge management was one of the hottest areas in information technology. The internet was creating a new ‘information age’, the knowledge economy was in full swing and HP CEO Lew Platt provided a financial rationale for investments in this area with his famous quote: “If only HP knew what HP knows, we would be three times more productive.”
Ten years on, a knowledge-driven leap in productivity has not transpired, and businesses are spending less as a consequence. In part this may be due to grossly inflated expectations of the benefits systematic knowledge management would bring, but in part it may also have sprung from the idea that knowledge was an asset businesses already owned but were failing to leverage. As a result the starting point was the data the business already had – not the data the business needed to have – with the knowledge management process aiming to transform that data into information which provided insight to shape the actions that would create value for the business.
While the idea of using existing assets better makes such a flow logical from a financial perspective, it is the reverse of what a strategic approach would dictate. Given the importance of high-quality insight to good decision making, a strategic approach to insight collection is merited, arguably essential. Such an approach would both prioritise the insights to be collected and put processes and systems in place for the acquisition, storage and sharing of the component data sets.
Achieving this requires inverting the traditional knowledge management flow of data to information to insight to action to value.
The starting point is the desired value the business wishes to create, both for customers and itself. This identifies the possible actions the business can take. The need to select between different options determines the insights required. These insights will be provided by information built up from raw data. Rather than working from what data the business has, the focus is on identifying what data the business should collect given how it plans to create value.
Deciding what data needs collection requires clear specification of the richness – or maturity – of insight required in each area. In broad terms there are three maturity levels.
The first level is simply measuring – just monitoring what is happening currently, whether that be existing performance to help a business understand how well it is doing or tracking external variables, perhaps using data supplied by an industry association or market research. The Measure maturity level will help a business understand which areas require attention (e.g. performance is declining or below target) or potential opportunities (e.g. reported growth in a particular segment). In both cases more information is required before deciding how to proceed.
This additional information is furnished at the second maturity level which augments measurement with explanation to enhance understanding. The Understand level helps a business clarify what it should do differently.
The third level incorporates both the breadth and depth of insight needed to support decisions where significant investment is involved and a number of options exist. The Select maturity level enables a business to choose between the different alternatives that it has.
The higher the financial and strategic impact, the greater the insight maturity needed. As the maturity level increases, so do the number of required data sources, not least because the necessary information is increasingly external to the business. And whereas the Measure level of maturity requires just quantitative data, the higher levels require additional qualitative insights – depth and breadth increasing with the transition from Understand to Select.
For a lot of operational decision making, only the lowest maturity level may be required – for internal issues managers will be sufficiently close to know the cause if things start to go wrong. Also much of the underlying data will be generated by enterprise resource planning (ERP) and customer relationship management (CRM) systems. The challenge is then clearly defining what metrics and reports are required so that the data warehouse can combine different data sets from different systems to generate the business intelligence required.
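As a minimal illustration of that combination step – with the systems, field names and figures all invented for the example – a report-level metric such as revenue per customer segment might be produced by joining ERP transaction records with CRM segment data:

```python
# Hypothetical sketch: combining records from two systems (all names
# and numbers invented) to produce one metric a report might require.

erp_orders = [  # transaction records, as an ERP system might supply them
    {"customer_id": 1, "revenue": 1200},
    {"customer_id": 2, "revenue": 800},
    {"customer_id": 1, "revenue": 400},
]
crm_customers = {1: "Enterprise", 2: "SMB"}  # segment lookup from the CRM

# Metric: total revenue per customer segment
revenue_by_segment = {}
for order in erp_orders:
    segment = crm_customers[order["customer_id"]]
    revenue_by_segment[segment] = revenue_by_segment.get(segment, 0) + order["revenue"]

print(revenue_by_segment)  # {'Enterprise': 1600, 'SMB': 800}
```

The point is not the code itself but the prerequisite it exposes: the metric had to be specified first, so that the two systems' data could be keyed and combined to answer it.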
But for more strategic decision-making, the underlying data required is unlikely to be process- and systems-driven. Some external information will have to come from ad hoc research. Other data will come from one-off interactions that staff have – the information collected from a new recruit about his previous employer, conversations with suppliers about trends they are observing, comments from end users on how their needs are changing. Since much of this qualitative information cannot be systematised, there is a higher risk of it not being retained and registered with the right people.
But by defining what is important, what types of data will help improve the quality of decision-making and who – e.g. Chief Information Officer or Chief Strategy Officer
– is responsible for improving the quality of insight on each dimension, the chances of this information actively being sought and passed to the right people are much increased.
The involvement of everyone – staff, suppliers, partners – in the quest for insight further increases the quality of data being collected.
Communicating the key questions that need answering is a start, but the wisdom of employees – particularly with regard to Select decisions – can also be tapped in more fun ways using decision or prediction markets.
Prediction or decision markets provide estimates of what will happen by aggregating opinions through a mechanism similar to the stock market. Participants in the market buy or sell shares of ‘claims’ about a particular prediction. Popular examples include the outcome of presidential elections, the box office takings of new films and Oscar results, but they can also be used to improve the quality of predictions for business-critical events. A claim usually states that, if proved true, owners of one share will receive a given amount, often $1. Participants can buy either Yes or No shares, depending on whether or not they believe the claim will transpire. They can make money either by holding the shares until the claim has proved true, so they receive the pay-out, or by selling the shares in the market if they feel the claim has become overpriced.
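The pay-out mechanics can be sketched as follows. The prices and the claim are invented for illustration, and a fixed $1 pay-out per winning share is assumed:

```python
# Toy sketch of a binary prediction-market claim (invented numbers),
# assuming a fixed $1.00 pay-out per winning share.

PAYOUT = 1.00  # each winning share pays $1 at settlement

def trade_profit(side, buy_price, claim_true, sell_price=None):
    """Profit per share: either sell early at sell_price, or hold to settlement."""
    if sell_price is not None:               # sold back into the market early
        return sell_price - buy_price
    won = claim_true if side == "yes" else not claim_true
    return (PAYOUT if won else 0.0) - buy_price

# Claim: "Product X sells at least $50m in its first year."
# Yes shares trading at $0.62 imply a crowd-estimated probability of ~62%.
print(round(trade_profit("yes", 0.62, claim_true=True), 2))   # 0.38
print(round(trade_profit("no", 0.38, claim_true=True), 2))    # -0.38
print(round(trade_profit("yes", 0.62, False, sell_price=0.80), 2))  # 0.18
```

The last line shows the second route to profit described above: a trader who sells into a rising price pockets the gain even if the claim later proves false.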
The uncertainty needs to be distilled to some degree of choice – for example which of two or three technologies will prevail – and the results need to be measurable, e.g. the technology being declared the industry standard by a qualified body. Where such definition or measurability is less clear, it can be manufactured through the incorporation of timescales and the creation of a threshold. A company seeking input from staff on whether a new product will be successful can offer them the opportunity to buy Yes or No claims on whether it will achieve the threshold sales or gross profits required for it to be deemed a success – for example sales of $50m or £25m in its first year after launch.
From a business’ point of view, aggregating opinions produces a better forecast than that of almost any participant in the market. As James Surowiecki described in The Wisdom of Crowds, crowds are almost always smarter than the smartest people in them (so long as the behaviour of one participant does not influence that of another). Also the profit mechanism means that people with good insight are rewarded while those with poor predictive abilities are discouraged from trading. Participants are incentivised to seek out information as they profit from doing so. Finally there is no hierarchy – the voice of a senior manager does not receive disproportionate weight or discourage others from sharing their opinions.
The real value of an insight strategy comes from defining where higher levels of maturity are required. Simply defaulting to the Measure level – and only collecting qualitative insight as required – is understandable and probably reasonable for most operational decisions, but it does potentially build in a delay before problems can be resolved. If the required insights are business-critical and their collection takes time, e.g. because research with parties external to the business is required, such delays could prove very costly.
Equally while a case could be made for delaying Select-level insights until they are required, a business is likely to know the decisions it will need to take, so planning what information is needed in advance will enable both better quality insights to be collected (versus a quick hit of fact collection close to the deadline) and a quicker decision to be made.
Thinking in terms of maturity levels helps a business improve both relevance and timeliness of insight collection – ensuring the right people have the right information at the right time to ensure the right decision is made.
About the author:
Jack Springman is Head of the Corporate Advisory Group at consultancy Business & Decision.