Future Results Not Guaranteed

Contrary to what vendors tell you, computer systems alone are incapable of producing accurate forecasts

It's been more than two years since Nike Chairman Phil Knight owned up to the sports shoe giant's disastrous $US400 million experiment with demand forecasting software. The headlines are well known: Nike went live with its much-vaunted i2 system in June 2000, and nine months later, its executives acknowledged that they would be taking a major inventory write-off because the forecasts from the automated system had been so inaccurate. With that announcement in February 2001, Nike's stock value plummeted, along with its reputation as an innovative user of technology.

But what has since trickled out in court documents from shareholder lawsuits may be even more disturbing, because it shines a harsh light on the inherent limitations of demand forecasting software. According to the documents, i2's supposedly state-of-the-art forecasting system couldn't communicate with Nike's existing systems, which impaired its ability to analyse large amounts of product information. At some point, data even had to be entered by hand, greatly increasing the chance of mistakes. And the forecasts themselves were way off. Relying exclusively on the automated projections, Nike ended up ordering $US90 million worth of shoes, such as the Air Garnett II, that turned out to be very poor sellers. The company also faced an $US80 million to $US100 million shortfall on popular models, such as the Air Force One.

Nike isn't the only company with a forecasting horror story. Corporate America is littered with companies that invested heavily in demand software but have little or nothing to show for it. Goodyear, for example, implemented a demand forecasting system in mid-2000 but hasn't shown significant improvement in managing its inventory, and last year the tyre company lost more money than the year before.

Yet vendors and academics are still pushing forecasting software. In 2002 alone, companies spent $US19 billion on demand forecasting software and other supply chain solutions, according to IDC (a sister company to CIO's publisher). And in a speech in February, Stanford University supply chain guru Hau Lee extolled the virtues of harnessing software to extract customer knowledge in order to forecast demand.

Many CIOs, however, remain sceptical. Privately, members of Lee's audience complained to a reporter present that the ability to accurately forecast could hardly be taken for granted. And according to a recent Booz Allen Hamilton survey of 196 senior executives, 45 per cent said that supply chain technology in general had failed to meet their expectations. More than half - 56 per cent - blamed the shortcoming squarely on demand forecasting software. From hard experience, a growing number of CIOs now realise that computer systems alone are incapable of producing accurate forecasts.

There are a number of reasons why. To begin with, forecasting systems are only as good as the data put in them and, due to the complexity of modern supply chains - where a company wants to collect information about multiple products from multiple customers and suppliers - more often than not the data isn't accurate enough. Furthermore, software can't predict the future, particularly sudden, unexpected shifts in economic or market conditions. Nor can it exercise the kind of rational analysis or judgement that human beings excel at. Hence, demand forecasting technology is inherently limited, and companies such as Nike and Cisco that rely on it without an institutionalised set of human checks and balances will invariably end up in trouble.

"Demand forecasting sounds scientific," says Sumantra Sengupta, CIO for the Scotts Company, the world's largest supplier of consumer lawn and garden care products. "But I would say that if you looked at the split between people, science and process, people are half the equation. Algorithms are algorithms. That is not what will win the game for me."

Good demand forecasting requires a combination of accurate data and smart people. Up-to-date sales data and point-of-sale (POS) information will almost always improve a forecast. So will having the processes and people in place to make sense of anomalous results or simply to check computer-generated predictions against the pulse on the street.

"Anyone who thinks you can do it with just mathematics and statistics is only partly right," says Doug Richardson, CIO of electronic products maker Vicor. "Human intelligence is also required."


The Whiplash Effect

Before vendors began selling demand planning software, forecasting was essentially a balancing act between competing factions within the enterprise. The marketing department would set a high target because it wanted the product to be a success, says Tom Burns, CFO for the enterprise network division of telecommunications equipment maker Alcatel. Salespeople, on the other hand, would come in with conservative forecasts since they wanted to keep their sales quotas low and manageable. "If marketing says we are going to sell $US150 million, and the sales guy says we are going to sell $US75 million, what do we tell the supply chain guys to build?" Burns asks.

One can see the appeal of a computerised system that could provide an objective answer to that question. Furthermore, the maths needed to build these systems has been around for nearly 75 years. It was Ronald Fisher, a British mathematician working after World War I, who first conceived of a system that could take numbers, look for patterns and then make predictions based on those patterns. The result, the classic regression model, is still used in 90 per cent of demand planning software today.

Regression essentially takes multiple variables, makes inferences about the relationships between them, and ultimately charts the result as a curve showing upward or downward trends. The curve can be extended to predict future results. For example, a regression study of the rate of death among people between the ages of 20 and 80 would, despite numerous exceptions, find a general trend that as people got older, the rate of death increased. You could then predict that an 81-, 85- or 90-year-old person would be even more likely to die than someone who is 80.
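To make the mechanics concrete, the sketch below shows the kind of trend extrapolation a classic regression model performs: fit a straight line to a year of monthly sales and extend it forward. The figures are invented for illustration, and real demand planning packages juggle far more variables, but the underlying idea is the same.

```python
# A minimal sketch of regression-based trend extrapolation.
# The monthly sales figures below are invented for illustration.
import numpy as np

# Twelve months of hypothetical unit sales showing a gentle upward trend.
months = np.arange(1, 13)
sales = np.array([100, 104, 101, 110, 108, 115, 118, 117, 124, 126, 130, 133])

# Fit a straight line: sales ~ slope * month + intercept.
slope, intercept = np.polyfit(months, sales, deg=1)

# Extend the fitted line to forecast the next three months.
future_months = np.arange(13, 16)
forecast = slope * future_months + intercept

for m, f in zip(future_months, forecast):
    print(f"Month {m}: forecast {f:.0f} units")
```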

The problem is that regression analysis - and any other statistical model a demand forecasting system may use - requires clean data and a potential relationship among the variables, says Rob Cooley, US technical director at KXEN, a demand forecasting vendor. In Fisher's day there wasn't the computational power to consider more than a few variables, which made it easier to focus on the accuracy of a few data points, as in the rate-of-death example. But today, computerised systems make it possible to consider hundreds, if not thousands, of variables - anything from weather to time of day - and a correspondingly vast number of data points.

Most of these data points are inaccurate, or more specifically, are only an estimate of what actually happened. The most common example is guessing what consumers bought based on what the company itself sold. While a retail store knows how much of a product it sells, the manufacturer only knows how much the retailer orders - and more often than not there are distributors acting as middlemen to further muddy the transfer of information about sales.

Logistics executives at Procter & Gamble studied how this dynamic affected demand planning and found that the further the data is from the point of sale, the more its accuracy decreases and the more forecasting errors increase. For example, P&G found that consumers bought its Pampers nappies at a fairly steady clip and that retailers' orders reflected this: Orders had the moderate swings one would associate with relatively flat demand. The distributors, however, would react to moderate increases by not only increasing their orders, but by upwardly adjusting their reserve stock, signalling a much larger increase in demand back to P&G. The manufacturer, in turn, would ramp up its Pampers production and continue the bullwhip effect down the supply chain. Ultimately, everyone would be left holding excess inventory.
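The sketch below illustrates, in simplified form, how that amplification can happen: each tier orders to cover the demand it sees plus a safety-stock buffer keyed to that demand, so small swings at the shelf grow larger at every step upstream. The ordering rule and the figures are assumptions made for illustration, not P&G's actual policy.

```python
# A rough illustration of the bullwhip effect: each tier orders what was
# just demanded plus an adjustment to its safety stock, so demand swings
# amplify as they travel upstream. All numbers are invented.
import random

random.seed(1)

# Fairly steady consumer demand, with a small random wobble.
consumer_demand = [100 + random.randint(-5, 5) for _ in range(24)]

def upstream_orders(demand_seen, safety_factor=0.5):
    """Each period, order what was demanded plus an adjustment to keep
    safety stock at safety_factor * current demand."""
    orders = []
    safety_stock = safety_factor * demand_seen[0]
    for d in demand_seen:
        target_safety = safety_factor * d
        orders.append(max(0, d + (target_safety - safety_stock)))
        safety_stock = target_safety
    return orders

retailer_orders = upstream_orders(consumer_demand)        # what the retailer orders
distributor_orders = upstream_orders(retailer_orders)     # what the distributor orders
manufacturer_build = upstream_orders(distributor_orders)  # what the manufacturer builds

def swing(series):
    return max(series) - min(series)

print("Swing at shelf:        ", swing(consumer_demand))
print("Swing in retail orders:", round(swing(retailer_orders), 1))
print("Swing at distributor:  ", round(swing(distributor_orders), 1))
print("Swing at manufacturer: ", round(swing(manufacturer_build), 1))
```

Run with these assumptions, the swing in orders grows at every tier even though demand at the shelf barely moves, which is exactly the pattern P&G described.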


Garbage In, Garbage Out

The best way to avoid the trap of overforecasting demand is to use point-of-sale information directly from the retailer. Since POS data is an accurate gauge of consumption, it improves the reliability of a forecast. That is how Scotts CIO Sengupta was able to improve his company's forecasting results. By using point-of-sale data, Scotts increased its forecast accuracy by more than 30 per cent in one year.

Before it started using POS data, Scotts would forecast the demand for its products at the national level - but not at the individual store level - for each of its customers, such as Wal-Mart and Home Depot. Each forecast would take into account how much of a given product - a particular type of fertiliser, say - each customer had ordered in the past and combine that with other factors such as expected weather patterns. Since orders were only an estimate of retail sales, the process left Scotts susceptible to the bullwhip effect. Furthermore, the sheer volume of orders aggregated at the national level greatly inflated each forecast's margin of error.

Now that Scotts has point-of-sale information from each retail outlet, its forecasts are more accurate and the risk of bullwhips is all but eliminated. Furthermore, the POS data lets Sengupta produce smaller, more detailed forecasts for each individual retail outlet if he wants (Sengupta says that Scotts actually forecasts in groups of stores to help reduce the impact of a one-time event in one store that wouldn't be replicated in another). Having many smaller, more accurate forecasts further reduces the overall margin of error.
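The sketch below shows, in simplified form, what forecasting by store group from POS history might look like: pooling each group's weekly sell-through damps a one-off spike in a single store before the trend is extrapolated. The store groupings and weekly figures are invented, and the grouping logic is an assumption for illustration, not Scotts' actual method.

```python
# A simplified sketch of store-group forecasting from point-of-sale data.
# Store groupings and weekly POS figures are hypothetical.
import numpy as np

# Hypothetical weekly POS units for individual stores.
weekly_pos = {
    "store_01": [40, 42, 41, 44, 90, 45],   # week 5 spike: one-off local event
    "store_02": [38, 39, 41, 40, 42, 43],
    "store_03": [55, 54, 57, 56, 58, 60],
    "store_04": [52, 55, 54, 57, 56, 59],
}

# Hypothetical grouping of stores, e.g. by region or retail banner.
groups = {
    "group_A": ["store_01", "store_02"],
    "group_B": ["store_03", "store_04"],
}

def forecast_group(store_ids, horizon=1):
    """Pool POS history for a group of stores and extrapolate the trend."""
    pooled = np.sum([weekly_pos[s] for s in store_ids], axis=0)
    weeks = np.arange(1, len(pooled) + 1)
    slope, intercept = np.polyfit(weeks, pooled, deg=1)
    next_week = len(pooled) + horizon
    return slope * next_week + intercept

for name, stores in groups.items():
    print(f"{name}: next-week forecast of about {forecast_group(stores):.0f} units")
```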

In improving its forecasting process, Scotts has an advantage: The retail and consumer packaged goods industry is well ahead of the game when it comes to sharing data such as point-of-sale information. Most companies in this industry follow the blueprint laid out by the Voluntary Interindustry Commerce Standards Association subcommittee on collaborative planning, forecasting and replenishment.

In the rest of the world, however, most companies aren't in a position to get POS information from their customers. In the first place, few companies collect product-level data at the point of sale. And second, many aren't willing to share data that has traditionally been viewed as a closely guarded competitive secret.

But that doesn't mean you can't get better data and use it to improve your forecasts. In Europe, for example, Scotts gets POS information from only its three biggest customers, which account for about 20 per cent of its business there - the others are either unable or unwilling to share it. "In Europe we understand that we can't get point of sale, but we still try to get as close to the point of final consumption as we can," Sengupta says. In this case, that means tracking when items leave the distribution centres. While it isn't the same as point of sale, Scotts at least knows where its products are going and how much has actually been sold to retail outlets, which is more accurate than traditional order information. Furthermore, the distribution centre is several days closer to the eventual point of sale than Scotts' own warehouse, making that data a better indicator of current market trends.

Other data sources can also help CIOs improve their forecasts. Imperial Sugar Vice President and CIO George Muller says that his company combines the order information it receives from its customers with market intelligence reports from Information Resources Inc (IRI) - data about what actually gets sold in stores. The IRI sales data, which will show, for example, how much sugar was sold in Atlanta-area supermarkets, can serve as a surrogate for point-of-sale data. At the very least, it gives Imperial Sugar something to check its order data against.
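The sketch below shows one simple way such a cross-check might work: compare customer order volumes against third-party retail sales figures market by market, and flag any gap large enough to suggest the orders overstate or understate real consumption. The figures and the 15 per cent threshold are invented for illustration and are not Imperial Sugar's actual process.

```python
# A sketch of cross-checking order data against syndicated retail sales
# data (a stand-in for an IRI-style report). All figures are invented.

# Monthly volumes by market, in tonnes (hypothetical).
orders_received = {"Atlanta": 1200, "Dallas": 950, "Houston": 800}
retail_sales    = {"Atlanta": 1010, "Dallas": 940, "Houston": 905}

THRESHOLD = 0.15  # flag if orders differ from retail sales by more than 15%

for market in orders_received:
    ordered = orders_received[market]
    sold = retail_sales[market]
    gap = (ordered - sold) / sold
    if abs(gap) > THRESHOLD:
        direction = "overstate" if gap > 0 else "understate"
        print(f"{market}: orders {direction} retail sales by {abs(gap):.0%} - review forecast")
    else:
        print(f"{market}: orders and retail sales broadly agree ({gap:+.0%})")
```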


Taking the Market's Pulse

Even if a demand forecasting system had 100 per cent accurate information, there is another problem: The past can't predict the future. Computer-generated forecasts use historical data to make assumptions about what will happen, but there is no way for them to anticipate major market changes. Take Belvedere International, an Ontario, Canada-based maker of skin-care products. When SARS broke out in Toronto, Belvedere sold more than a year's worth of its One Step hand disinfectant in a month. No forecasting system could have predicted that. Belvedere has kept its assembly line running 16 hours a day, six days a week - modifying production of other goods in the process - just to keep pace with demand. "It's no different from forecasting the weather," says Gene Alvarez, Meta Group's vice president of technology research services in the US. "Once in a while something the model couldn't figure out catches them off guard. Same thing happens with consumer taste and demand."
