by Fred Hapgood

Many Working Together: Massively Parallel Projects

News
Feb 01, 2002
Networking

Product names are not always obvious (think of the anxiety drug Alprazolam, for instance), but sometimes marketers get to go home early. One such moment passed in the early ’90s when a computation technology called massively parallel processing (MPP) launched its bid for CIO mind-space. The name came ready-made from the scientific and technical domain, where it referred to the practice of throwing very large numbers of processors at resource-hungry problems such as fluid dynamics, seismic processing, molecular modeling and astrophysical simulations.

The term resonated with a developing issue in enterprise computing: By the early ’90s CIOs were well aware that resource demands were going to grow forever, or at least until well past retirement. That prospect underlined the need for an IT architecture that could expand smoothly, simply and indefinitely merely by plugging in identical computing units one after another. Conventional, sequential computers did not fit the picture because of limitations inherent in depending on a single processor. Even if a system contained several chips, one would always need to act as a manager for all the subprocessors. Inevitably this top processor and its memory buffers would become overwhelmed, and when that happened the IT department would need to rethink the whole system and rewrite lots of software. MPP, on the other hand, seemed to promise plug-and-play scalability over the long term.

We liked the logic, and in a July 1993 article called “Divide and Conquer,” we argued that MPP looked like a viable possibility for companies facing exceptional scalability problems. Admittedly, we noted, there were reprogramming, support, training and maintenance issues that needed to be thrashed out, but as more companies ran into the limits of conventional architectures, the market for solutions to these problems could only grow. “The cycle has begun,” we said.

There may have been some sense in which this was true. Still, had we known that a couple of years later most of the product lines (and some of the companies) mentioned in the article would be dead, our tone would have been cooler. The programming issue turned out to be particularly lethal. A processor requires about a hundred instructions to send a message to a second processor. It makes no sense to step through a hundred instructions just to send one; the sending processor could save 99 cycles by simply executing that one instruction locally. As a result, a message sent between MPP processors needed to carry at least a hundred instructions’ worth of work just to break even. MPP programmers therefore had to think about more than the immediate function. Their programs had to accumulate packages of instructions for each of the processors, much as FedEx accumulates freight before flying a plane between two cities. If the packages were too large, some processors would be left waiting; too small, and they would fail to pay for the “stamp,” the processing cost of the transaction.
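To get a back-of-the-envelope feel for that break-even point, here is a minimal sketch (in Python, with purely illustrative names; only the roughly 100-cycle message cost comes from the article) comparing the cost of executing a batch of work locally against the fixed cost of mailing it to another processor.

```python
# Back-of-the-envelope model of the "stamp" (per-message overhead) described above.
# Assumptions: every instruction costs one cycle, the only overhead is the fixed
# ~100-cycle cost of sending a message, and the receiving processor's cycles are
# otherwise idle. Names are illustrative, not from any MPP vendor's API.

MESSAGE_OVERHEAD_CYCLES = 100  # cost of sending one message, per the article

def net_benefit_of_offloading(batch_size_instructions: int) -> int:
    """Cycles the sender saves (positive) or wastes (negative) by shipping a batch
    of work to another processor instead of executing it locally."""
    local_cost = batch_size_instructions   # cycles to just run the batch here
    remote_cost = MESSAGE_OVERHEAD_CYCLES  # cycles spent paying the "stamp"
    return local_cost - remote_cost

for batch in (1, 50, 100, 500):
    print(f"batch of {batch:>3} instructions -> net benefit {net_benefit_of_offloading(batch):>4} cycles")
# batch of   1 -> -99 cycles (the 99 wasted cycles mentioned above)
# batch of 100 ->   0 cycles (break-even: the package just pays for its stamp)
# batch of 500 -> +400 cycles (large packages amortize the overhead)
```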

One of the early fantasies of MPP was that it would save money by using lots of off-the-shelf chips instead of a few specialized mainframe processors. However, given the performance hit imposed by the stamp, or “communications tax,” MPP vendors were in no position to accept a further speed reduction by using low-performance chips. That meant they had to rewrite their software whenever a new chip came out. “We had to reprogram and reprogram and reprogram,” remembers Adam Kolawa. “I remember a sequential programmer shaking his head and saying, ‘Adam, you’re wasting your life.’” (Eventually Kolawa agreed. He is now chairman and CEO of Parasoft, a software testing company in Monrovia, Calif.)

Still, the vision of MPP’s smooth scalability was too attractive to give up completely, and during the next several years paths began to appear through the programming jungle. One was to move the MPP concept to a new application, from computation to storage. The steadily declining cost of storage has meant that companies now routinely work with databases measured in terabytes. Many are anticipating petabyte-level stores. Given current technology, it is difficult for a system based on sequential processing to keep track of more than a hundred terabytes (for the same reason: The processor gets saturated).

In theory, parallelized storage, as offered by companies such as San Francisco-based Scale Eight, can manage an effectively unlimited number of bytes. Further, since a single disk write takes roughly a thousand times longer than executing an instruction, the communications tax is relatively lower and the constraints on programming somewhat relaxed. (Josh Coates, founder, CTO and acting CEO of Scale Eight, points out, however, that managing a single disk image spanning many hundreds of terabytes raises issues almost as demanding, such as making sure that all the writes that follow from a single data entry get made at the same time.)
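The same arithmetic suggests why storage is a friendlier home for parallelism. In the rough sketch below, the 100-instruction message cost and the thousand-to-one disk-write ratio are the figures cited above; everything else is a simplifying assumption.

```python
# Rough illustration of why the communications tax weighs less on parallel storage
# than on parallel computation. The 100-instruction message cost and the ~1,000x
# disk-write ratio are the figures cited above; the rest is a simplifying assumption.

MESSAGE_OVERHEAD = 100   # instruction-equivalents spent sending one message
DISK_WRITE_COST = 1_000  # one disk write takes about a thousand instruction times

def overhead_fraction(work_cost: int) -> float:
    """Messaging overhead as a share of total cost when one unit of work is shipped."""
    return MESSAGE_OVERHEAD / (MESSAGE_OVERHEAD + work_cost)

print(f"shipping one instruction of compute: {overhead_fraction(1):.1%} overhead")               # ~99%
print(f"shipping one disk write:             {overhead_fraction(DISK_WRITE_COST):.1%} overhead")  # ~9%
# The tax has not gone away, but relative to a disk write it is small enough that
# storage code does not have to batch nearly as aggressively as MPP compute code did.
```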

In the mid-1990s, database vendors such as IBM and Oracle began to introduce relational databases that ran on a small number of processors, such as four or eight. While not massively parallel in the original sense (those systems had contemplated thousands of processors, and sometimes tens of thousands), these “modestly parallel” systems presented many of the same development issues, but in a smaller and more manageable form. Hardware companies began selling systems optimized for these applications. As the software and hardware grew together, more companies began finding ways to implement parallel processing of commercial data.

Step by step, this evolution is hauling the IT world into more “moderately parallelized” environments. Four- and eight-processor boards are near commodities, allowing vendors to slap together fairly large machines for low prices. Torrent and Ab Initio have managed to encode the intelligence needed to write programs for dozens of processors into graphical tools, radically lowering the level of expertise needed to create multiprocessing applications. Rod Walker, president and COO of Knightsbridge Solutions, a Chicago-based systems integration company, says Knightsbridge finds itself using the Ab Initio software in more than half its implementations. “We have seen data processing times increase by 10 times, 20 times and sometimes 30 times,” he notes. Given numbers like those, perhaps this time the cycle really has begun.