by Gary Beach

Publisher: Napster Shows Power of P2P

Oct 15, 2000

Have you ever heard of Shawn Fanning? Probably not. But what about Napster, the Redwood City, Calif.-based company that gave Generation X the ability to trade music files from one PC to another over the Internet? Loved by the masses and loathed by the powerful—most notably the Washington, D.C.-based Recording Industry Association of America and the White House—Mr. Fanning and his Napster cohorts have appropriately turned the world of computing on its ear much the same way tech teenagers led by Marc Andreessen did with Netscape in 1994.

Napster is way cool and may eventually prove to be illegal. But its legacy contribution to your world of computing may be putting the spotlight on a tried—but relatively unsuccessful—mode of computing called peer-to-peer (P2P).

Andy Grove, chairman of Intel, a company that stands to benefit significantly if P2P catches on with CIOs, recently proclaimed “the whole Internet could be rearchitected by Napster-like technology.” That’s high praise from someone who knows.

Grove, in a sense, is right. Napster-spawned P2P technology could lead to the next wave of computing. Or the wave could crash. Go back 50 years. Pick your platform—mainframe, client/server, Internet, thin-client. All were based on architectures featuring centralized information repositories.

A P2P computing model has the potential to replace that centralized architecture. So what exactly is it? P2P computing allows client computers to bypass traditional database stores and exchange data directly, client to client—hence peer to peer. P2P puts the PC back at the center of the computing world, highlighting the need for more powerful processors on every desktop. This is why the concept appeals to Grove (and probably to Bill Gates too). Servers become secondary components in your network.
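
To make the idea concrete, here is a minimal Python sketch of what “exchange data directly, client to client” could look like. The host name and port number are hypothetical, and this illustrates the model in general rather than how Napster itself was built: every desktop runs a small file server and a client at once, so data moves straight between two PCs with no central repository in the path.

```python
# Minimal sketch of the P2P model described above: every desktop is both
# server and client, so files move machine to machine with no central
# repository in the path. The port and host name below are hypothetical.
import threading
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler
from urllib.request import urlopen

PEER_PORT = 8470  # hypothetical port every peer listens on

def serve_local_files():
    # The "server" half: expose this PC's shared directory to other peers.
    server = ThreadingHTTPServer(("", PEER_PORT), SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

def fetch_from_peer(peer_host, filename):
    # The "client" half: pull a file straight from another desktop,
    # bypassing any central database or file server.
    with urlopen(f"http://{peer_host}:{PEER_PORT}/{filename}") as resp:
        return resp.read()

if __name__ == "__main__":
    serve_local_files()
    # e.g. data = fetch_from_peer("coworker-pc.example.com", "song.mp3")
    input("Sharing local files; press Enter to stop.\n")
```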

Why P2P? Most desktop computers are very powerful machines. Yet up to 97 percent of a typical machine’s power sits dormant. P2P aims to harness that latent power and spread it around an organization via agents that poll computers, asking whether they have processing power available for a job. Sounds logical. But it is not easy to implement, control or build securely. Plus, most applications do not lend themselves to being decomposed into the independent pieces that P2P demands.
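
As a rough sketch of that agent idea, the Python below has a coordinator poll a list of desktops and keep the ones reporting spare cycles. The status endpoint, the port, and the idle threshold are all assumptions made up for illustration, not any real product’s interface.

```python
# Rough sketch of the polling agent described above. The /status
# endpoint, port, and idle threshold are assumptions for illustration.
import json
from urllib.request import urlopen
from urllib.error import URLError

AGENT_PORT = 8471  # hypothetical port where each desktop agent listens

def find_idle_peers(hosts):
    # Poll each machine and keep the ones reporting spare cycles.
    idle = []
    for host in hosts:
        try:
            with urlopen(f"http://{host}:{AGENT_PORT}/status", timeout=2) as r:
                status = json.load(r)  # e.g. {"cpu_utilization": 3}
            if status.get("cpu_utilization", 100) < 10:
                idle.append(host)
        except (URLError, OSError, ValueError):
            continue  # powered off, no agent installed, or a bad reply
    return idle

# Work can only be farmed out to these peers if the application can be
# decomposed into independent pieces -- the constraint noted above.
```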

In theory, the collective information of the thousands of corporate computers in your infrastructure becomes your competitive advantage. An advantage, I might add, that you, the CIO, will have a very difficult time controlling.

Proponents of P2P will tell you the model results in more efficient distribution of corporate information because users aren’t tying up central servers for information requests and downloads.

But P2P has a dark side. It is essentially a socialist computing platform whose existence and survival rely largely on trust. At heart I am an optimist, but I truly question whether users can be trusted in P2P deployments. Can users be trusted to transfer information securely? Most can, but it takes only one high-profile security meltdown to bring the P2P platform crashing down all around you.

On paper, P2P is cool and is surely a way to build up your music collection. But for CIOs trying to build secure electronic businesses, P2P adds up to one enormous and unmanageable headache. Stay away from it.