Sunday, November 16, 2008

The Truth About Forecasting: Part One--The Five Deadly Errors

I've been reading Nassim Nicholas Taleb's book "The Black Swan," which has gotten a lot of attention recently due to the financial meltdown. I may go into Taleb's core arguments in a future post, but one of his arguments is that forecasting of things that aren't physically based is all but impossible. Here's an example: We've learned how to forecast the weather fairly well, at least in general terms over short periods of time, because we increasingly understand the underlying physics. However, the five-year forecast that was undoubtedly assembled by product planners at GM last year has long since been shredded and recycled. The numbers, even for 2008, were useless because while the forecast might have had some allowance for the impact of $4/gallon gasoline, it certainly didn't allow for the possibility of a financial meltdown and complete collapse of the consumer credit market.

This brings me to my own experience as a forecaster over a nearly 30-year career in high tech. It's my belief that most forecasts aren't worth the paper they're printed on, because they:
  • Are obvious
  • Are based on false assumptions
  • Are biased to satisfy the audience
  • Cover too long a time horizon
  • Don't (and can't) take into consideration massive, but in hindsight predictable, discontinuities such as our current financial mess
Here's a case that demonstrates four of the five errors. In 1980, my first job out of business school was at Hewlett-Packard's Corvallis (Oregon) Division. At that time, Corvallis was responsible for HP's calculator product line, but they also had a line of personal computers called Series 80. The Series 80 machines were based on a processor designed by HP and derived from calculators. They were incompatible with any of the other PCs in that still nascent market, so software, hardware, peripherals—everything—had to be designed especially for them.

I was hired to be the Product Manager for Series 80 software, and part of my job was to forecast the potential sales of new software products. Since our software only worked on our computers, we had to start with sales of Series 80 machines, which amounted to a few tens of thousands of units a month and were growing modestly. I was responsible for an array of software packages, each of which had its own appeal, including a database, a word processor, and even a Series 80 version of VisiCalc, the original spreadsheet. However, our primary market was engineers, the market for most of HP's products at the time. Were we going to branch out and try to reach consumers and businesspeople? That could make a big difference in the potential market size, and if our software was very successful, it could drive sales of computers.

One of my first questions was whether I could go out and poll current and potential customers to find out their receptivity to our new products. That idea was shot down, because we didn't have the budget for primary research. The industry was so new that there weren't any research services that we could subscribe to in order to independently gauge the market potential (and, as we'll see later, their own forecasts were likely to be of dubious value). That's when I was introduced to the concepts of "WAGs" and "SWAGs" by one of our most experienced product managers.

"WAG" stands for Wild-Assed Guess, and "SWAG" stands for Silly Wild-Assed Guess. Neither WAGs nor SWAGs are entirely guesses, but they're close. When you don't have hard historical information, you have to estimate what percentage of the existing installed base will buy the product and how many new users will also buy, every month and every quarter, for five years. So, you start with a "rule of thumb"—say, 10% of your existing and new PC buyers over time will buy a particular piece of software, with that number going up to 15% in Year 2 and 20% in Year 3. What's your proof? You don't have any, but it sounds reasonable. By using WAGs and SWAGs, I committed the error of basing the forecast on false (or at least dubious) assumptions.

Once I completed the unit sales forecast, I had to determine the price at which we should sell each product. HP had sold software for "personal computers" over the years, but these were massive, specialized desktop computers that sold for many times the price of our Series 80 models. The company's prevailing approach to pricing software for those machines was to look at the software's manufacturing cost, and then mark it up by a given percentage. (Development costs were part of HP Labs' budget, and were not factored into product costs.) That's where I began with the pricing for Series 80 software, but it became clear that in some cases the software would be too expensive for buyers, and in other cases the profit margins were simply too high. (Too high? In those days, HP management felt that charging too much for products—based on their costs—was unethical.)
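
In modern terms, that cost-plus exercise looks roughly like the sketch below. The cost, markup, and thresholds are invented for illustration, not HP's actual figures.

```python
# A sketch of cost-plus pricing: start from manufacturing cost, apply a standard
# markup, then sanity-check the result. All figures are invented for illustration.

manufacturing_cost = 35.00     # assumed per-unit cost of media, manual, packaging
standard_markup = 3.0          # assumed corporate markup multiplier

list_price = manufacturing_cost * standard_markup
gross_margin = (list_price - manufacturing_cost) / list_price
print(f"Cost-plus list price: ${list_price:.2f} (gross margin {gross_margin:.0%})")

# The two failure modes described above:
if list_price > 125.00:        # assumed ceiling on what buyers would pay
    print("Too expensive for the market")
elif gross_margin > 0.70:      # assumed ceiling on an acceptable margin
    print("Margin too high by HP's standards of the day")
```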

Now I had a units forecast and a revenue forecast. I even used a WAG to estimate price changes over time. But before I could formally present them to management for approval, I had to calculate the overall rate of return on the product—too high, and the forecasts would go back to be redone with lower profit margins; too low, and the product would be scrapped. My first time through, the margins were too low, so I was told to go back and try again. I raised the units forecast over time, but it was unrealistic compared to separate forecasts for hardware sales, so I fiddled with initial prices and changes over time in both prices and market penetration until I got within the company's rate of return guidelines. (It turned out that competitors were selling comparable products for considerably more money, but those margins wouldn't wash within HP Corporate.)
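
Here's a sketch of what that kind of "fiddling" amounts to in modern terms. The cost, starting price, and margin band are invented, and a simple gross-margin band stands in for HP's actual rate-of-return calculation, which was more elaborate.

```python
# A sketch of adjusting price until the projected margin lands inside the
# corporate guidelines. All figures are invented for illustration.

unit_cost = 35.00
price = 55.00                  # first pass: margin comes out too low
margin_band = (0.40, 0.55)     # assumed acceptable gross-margin range

def gross_margin(p, c):
    return (p - c) / p

# Nudge the price until the margin falls inside the band. This is exactly the
# kind of adjustment that detaches a forecast from reality.
while not (margin_band[0] <= gross_margin(price, unit_cost) <= margin_band[1]):
    price += 1.00 if gross_margin(price, unit_cost) < margin_band[0] else -1.00

print(f"'Approved' price: ${price:.2f}, gross margin {gross_margin(price, unit_cost):.0%}")
```

In practice the knobs were not just price but also the penetration assumptions themselves, which is how a forecast gets reverse-engineered from the answer management wants to see.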

So I had committed my second error, that of biasing the forecast to satisfy the audience. Virtually any connection between the approved forecast and reality was lost in order to meet HP's financial guidelines. But wait, there's more. My forecast had to cover five years. We now know that five years is a very long time in the personal computer business, but it was all new back then. So, I forecast five years of growth, assuming updated versions of the software over time. What actually happened was that the next year, 1981, IBM introduced its first PC, which revolutionized the industry and created a new standard. Then, in 1984, the Apple Macintosh came out, helped in no small part by two PC product managers from HP Corvallis who had gone to work on the Mac in 1982. The Series 80 product line simply couldn't compete in this new world, and was discontinued altogether in 1984.

With my five-year forecast, I committed errors three and four: First, five years was far too long a horizon, given the rapidly changing nature of the PC industry. Second, there was an "unknown unknown" being developed in Boca Raton, Florida, which made my entire forecast and product plan moot. In hindsight, the flaws of the Series 80 platform made it very vulnerable to competition, but I was too immersed in the nuts and bolts of getting my products out the door to see it.

In Part Two of this discussion, I'll discuss the problem of obviousness.



