Future Results Not Guaranteed


Even forecasts made with a limited number of variables and accurate data will be off, because they still rest on the fundamental assumption that what was true yesterday will be true tomorrow. And because the data about a change lags behind the change itself, it takes human market watchers to spot shifts in the business climate.

Vicor, which manufactures power converters for electronic circuit boards, found this out at the beginning of the recent economic downturn. Until a year ago, the company had used a homegrown forecasting system that it had built in 1993. CIO Richardson describes it as a straight-line forecast based on sales history. Company executives relied solely on the automated forecasts to predict demand for their products.

"It was reasonably good in the 90s when demand was increasing at a nice steady rate," he says. "Where it broke down was when the product mix increased and the business downturn started."

Vicor didn't see it coming. In a conference call in April 2001, CEO Patrizio Vinciarelli said that shipments "fell off the table", and the company was left with a massive inventory glut. "When the future doesn't resemble the past, none of this forecasting software works well," Richardson says.
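Richardson's description of the old system, a straight-line forecast based on sales history, corresponds to simple linear extrapolation: fit a trend line through past sales and project it forward. A minimal sketch (illustrative only; the details of Vicor's 1993 system are not public) shows why such a model works during steady growth but keeps projecting strength after demand has already collapsed:

```python
def straight_line_forecast(history):
    """Least-squares linear trend through past sales, extrapolated one period ahead."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # projected value for the next period

# Steady growth: the trend line tracks demand well.
steady = [100, 110, 120, 130, 140, 150]
print(straight_line_forecast(steady))    # → 160.0

# Sudden downturn: most of the history predates the collapse, so the
# fitted line barely bends and the forecast badly overshoots demand.
downturn = [100, 110, 120, 130, 140, 60]
print(straight_line_forecast(downturn))  # → 100.0, though actual demand just fell to 60
```

The second case is the failure mode the article describes: the model answers "what would this history look like if the trend continued?" rather than "has the trend broken?", which is precisely the question only a human watcher was in a position to ask.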

The mishap taught Vicor the necessity of factoring human intelligence into its forecasts. To make sure that it isn't caught off guard again, the company set up a dual forecasting process in which the sales department produces one forecast and the computer system, which was upgraded a year ago, produces another. The two are complementary: the sales department tends to be too conservative (Richardson thinks the salespeople are merely cautious; a cynic might point out that they are compensated for selling above quota), while the computer system won't necessarily pick up on changes in the market that salespeople can often see.

For example, month after month, one telecom customer of Vicor kept placing the same order, and month after month the computer spit out a flat forecast. But a sales manager in the field noticed that the telecom had increased its order with another supplier whose parts go into the same product as Vicor's. The sales manager talked to the telecom company and confirmed that it had indeed decided to ramp up production. Armed with that information, Vicor increased its own production and was prepared when the telecom placed a bigger order.

While Vicor uses computer-generated demand forecasts as a check and balance for the human-generated forecasts, Scotts takes a slightly different approach. It takes its computer-generated forecast and distributes it to designated forecast planners for feedback. The planners, who are experts for the store and area they represent, make changes based on their expertise. For example, a planner in the Northeast might lower a forecast due to bad weather that limits gardening, or another might increase a forecast if he knows that a particular store is planning a promotion. Scotts takes one unusual step to ensure that the planners have access to the most up-to-date information: The planners actually work in the office of the customer company's buyer. If, for instance, the planner represents Home Depot in a particular region, he works in the office of Home Depot's buyer for that region. Sengupta says that the close proximity fosters collaboration between the planners and Scotts' customers.

In the end, the demand forecasting failure at Nike and other companies can be laid squarely on the shoulders of executives who put too much faith in technology. Court records in the lawsuits by shareholders against Nike reveal that executives for the company didn't even hold meetings to review and discuss the computerised forecasts that turned out to be so disastrously wrong. In other words, Nike management neglected to put in place a high-level process of human checks and balances for the computerised forecast. While that negligence actually enabled Nike executives to successfully argue that they were initially unaware of the flawed forecast that was generating such a huge inventory glut, it was a Pyrrhic victory. The company still lost $US180 million in sales and a third of its stock market value.

The Nike case powerfully illustrates that forecasting, no matter how advanced vendors claim their technology is, has to be an executive-level process. Executives need to review the computerised forecast and analyse how it squares with information from their sales and marketing reps, and then sign off on a number that the whole company can live with. At Alcatel, executives meet on a regular basis to dissect and discuss forecasts, which are produced by combining computer readouts with human intelligence.

"The final meeting here is attended by me, the head of supply chain and the heads of marketing and sales," says Alcatel CFO Burns. "We all have to approve the decision. We live and die by it."

SIDEBAR: 3 Demand Forecasting Myths

1. Inaccurate forecasts can still be useful as long as you treat the result as a guide rather than the gospel. At the very least, having one forecast for the whole company keeps departments from coming up with their own grossly different forecasts.

2. While number crunching is important, what will ultimately make or break a forecast is how well you know your customers and the market. That requires a sales force that can both communicate with customers and honestly share that information with the rest of the company.

3. Contrary to what vendors want you to believe, most forecasting software is pretty much the same. The algorithms have been around for so long now that it is unreasonable to expect that one system's maths is better than another's. The important thing when choosing demand forecasting software is selecting a system that's robust enough to handle the amount of data that you intend to feed into it.

Copyright © 2003 IDG Communications, Inc.
