It's never a good sign when a company identified as a good case-study subject has to put off an interview because of a sudden IT crisis. In the case of Loro Piana USA, the U.S. operation of Italian haute cashmere and wool couturier Loro Piana, S.p.A., the sudden problem had nothing to do with the virtualization, 10-gigabit Ethernet or storage-area networking projects CIO.com had called to talk about.

An electrical circuit breaker in the company's Stafford Springs, Conn., headquarters tripped, and an uninterruptible power supply missed its cue to provide emergency backup power, knocking out one of the company's two physical servers and most of its 40 or so virtual servers.

"A good chunk of the room went quiet," says Loro Piana USA IT Manager Aaron Martin, who heard the change from his office right outside the server room and came to investigate. "You never want to walk into a server room that's quiet. There's something really wrong when that happens."

"Amazingly," as Martin described it, the other physical server—a normal production server running VMware's ESX virtualization software, including its disaster recovery capabilities—relaunched all the VMs and applications that had been running on the crashed machine.

In less than 15 minutes all the applications were running again without having lost any data, though the effect was spoiled because the single array that stores all the company's data was also on the failed UPS, so none of the file systems were available, Martin says. The surviving server was overloaded, so response time was slow, and the data stores for the five Exchange servers that had been running on the downed machine were all corrupted in the crash. Even after rebooting the downed server, the switches and the company's primary storage array, all of which had been connected to the failed UPS, the Exchange servers were still misbehaving.
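The two-node behavior described above can be sketched in a few lines: when one host fails, an HA-style restart policy moves every VM it was running onto the surviving node, which is exactly how one machine ends up carrying the whole load. This is a minimal illustration, not VMware's actual implementation; the host names and VM counts are made up.

```python
def failover(hosts: dict[str, list[str]], failed: str) -> dict[str, list[str]]:
    """Restart every VM from the failed host on the surviving hosts."""
    orphans = hosts.pop(failed, [])          # VMs stranded by the outage
    for survivor in hosts:                   # pile them onto what's left
        hosts[survivor].extend(orphans)
    return hosts

# Hypothetical two-node cluster, roughly matching the article's ~40 VMs:
cluster = {"esx1": [f"vm{i}" for i in range(20)],
           "esx2": [f"vm{i}" for i in range(20, 40)]}
cluster = failover(cluster, "esx1")
# All 40 VMs now sit on esx2, mirroring the overload Martin saw.
```

With only two nodes, any single failure doubles the load on the survivor, which is why response times slowed even though the applications came back.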
That's one of the major risks of consolidating down to a small number of servers, according to Chris Wolf, an analyst at The Burton Group. "I'd be concerned for any organization clustering with just two nodes," Wolf says. "If one went down, you'd have all your VMs on one server and you'll be sweating bullets until you can get that other node replaced or repaired."

Loro Piana actually has other VMware servers running in its New York office—and Martin plans to link the two offices so the company's production servers can double as failover or disaster-recovery targets. But first he has to complete a switchover in telecommunications providers from Verizon to Paetec. The new carrier's network, which connects 20 stores to the New York and Connecticut offices, saved so much money that Martin was able to upgrade from 1.5Mbit/sec Multiprotocol Label Switching (MPLS) connections to 20Mbit/sec.

Without that additional speed, and the second data array in the New York office, it isn't practical for the two offices to back each other up in real time. The switchover is due to happen the first week in December, however, and the two-way failover link should be functioning soon after, Martin says.

Even so, Wolf says, many small companies with tight resources find it safer to buy an additional server they can use as a staging area, or as a development and testing server, and rely on that for disaster recovery. Using production servers as failover targets could degrade the performance of the machines pressed into service as backups.

"The issue isn't just getting the servers running again, I'd also worry whether I was meeting my [service level agreements] as well," Wolf says.
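Some quick arithmetic shows why the bandwidth upgrade matters for real-time, two-way replication. The sketch below computes how long a given change set would take to move over the old 1.5Mbit/sec MPLS link versus the new 20Mbit/sec one; the 50GB daily change volume is a hypothetical figure, not from the article.

```python
def transfer_hours(data_gbytes: float, link_mbit_per_sec: float) -> float:
    """Hours to move a data set over a WAN link at full utilization."""
    bits = data_gbytes * 8 * 1000**3              # decimal gigabytes to bits
    return bits / (link_mbit_per_sec * 1000**2) / 3600

# Hypothetical 50GB of daily changes between the two offices:
old_link = transfer_hours(50, 1.5)   # ~74 hours on the 1.5Mbit/sec MPLS link
new_link = transfer_hours(50, 20)    # ~5.6 hours at 20Mbit/sec
```

At 1.5Mbit/sec, a day's worth of changes would take roughly three days to replicate, so the link could never catch up; at 20Mbit/sec the same volume fits comfortably inside a day, which is what makes cross-office failover plausible.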
A New Storage Array and 10 Gigabit Ethernet Cards

It took a day and a half for Martin's team to rebuild the Exchange data stores and get the servers back online, but the company's other applications, networks and data were all available to the 350 or so Loro Piana workers in the company's Connecticut headquarters, New York office and 20 retail stores around the country.

For Martin, the crash reinforced the need for a stable, replicated, disaster-recovery-enabled network of virtual servers—which he's been building toward for some time, though he actually started serious work on it with a storage array, rather than the virtualization software itself.

Martin wanted to build on a fast data array that could manage both block-data access, as a storage area network (SAN) does, and file access, as network-attached storage (NAS) devices do.

He picked a $30,000 array from Nimbus Data Systems because it included a 10 gigabit Ethernet connection and actually cost less than similar units from EMC, Network Appliance and other storage companies, most of which didn't offer 10Gbit/sec Ethernet even at higher prices.

He uses the array as a central data store for the whole company, though there's a lower-end Nimbus in the New York office as well. The main array talks to two AMD-based servers that support 30 to 40 VMware ESX virtual machines running all the company's applications, except one retail specialty application that remains on an AS/400.

Martin bought servers with two dual-core processors each, but with the ability to upgrade to four quad-core chips, to support a plan to eventually host as many as 75 virtual desktops alongside the existing VMs.

The desktop virtualization project, which will use Citrix software and Xen hypervisors, will tax the servers more and may require processor or memory upgrades, but won't come anywhere near the capacity of the Nimbus array or of the 10Gbit/sec Ethernet connection through which it talks to the world.
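The headroom in that server purchase is easy to quantify. The sketch below compares core counts and VMs per core before and after the planned CPU upgrade, using the article's figures (two dual-core chips today, four quad-core later, roughly 40 server VMs plus up to 75 virtual desktops); treating VMs per core as the sizing metric is a simplification for illustration.

```python
def cores(sockets: int, cores_per_socket: int) -> int:
    """Total physical cores in one server."""
    return sockets * cores_per_socket

def vms_per_core(vm_count: int, total_cores: int) -> float:
    """Consolidation ratio across the whole cluster."""
    return vm_count / total_cores

current = cores(2, 2)       # two dual-core CPUs per server, as purchased
upgraded = cores(4, 4)      # the planned four quad-core configuration
today = vms_per_core(40, current * 2)         # ~40 server VMs, two hosts
later = vms_per_core(40 + 75, upgraded * 2)   # plus 75 virtual desktops
```

Even with 75 desktops added, the upgraded configuration runs fewer VMs per core (about 3.6) than the current one does (5.0), which is the sense in which the desktop project "won't come anywhere near" the hardware's capacity.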
Overbuying storage bandwidth and server capacity will actually save money in the long run, not only by extending the useful life of the hardware, but also by making room for virtual desktops and other applications that will themselves save money, Martin believes.

The company's desktop virtualization project is, in part, a way to avoid a coming round of hardware refreshes as laptops and desktops age, as well as a way to consolidate data and IT costs. Being able to launch a virtual desktop from a "golden" image that doesn't change and can't be infected by an end-user's Web browsing habits saves an awful lot of IT troubleshooting time, and gives the company better control of its data and risks, Martin says.

Analysts also estimate that 10Gbit/sec Ethernet is more than twice as fast as, and as much as a third cheaper than, 4Gbit/sec Fibre Channel, which requires special interface cards and fiber cabling. Combining block and file storage on one unit cuts costs even further, and consolidates administration onto one piece of hardware.

"Down the road I knew 10GigE was going to be the hottest-selling ticket and I didn't want to have to forklift anything out to migrate to it later on," Martin says. "With pictures and files and all the other corporate data on one Nimbus array and right in the middle of a [virtual desktop infrastructure] project, the last thing I needed was to have everyone['s traffic] coming down to one-gig connections. Throughput is becoming the lifeblood of all my information, so it's a really big deal."

Throughput is a big deal, and buying 10Gbit/sec Ethernet to plan for the future shows solid architecture and capacity planning, The Burton Group's Wolf says. Prices for 10Gbit/sec Ethernet should drop substantially in the next six to 12 months, however, so organizations that have to buy more than two or three of the cards for high-capacity servers might do better to wait a while, Wolf says.
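The analysts' speed-and-cost comparison can be made concrete as dollars per Gbit/sec of nominal bandwidth. The adapter prices below are illustrative assumptions (a mid-range figure from the Neterion pricing quoted later in the article, and an assumed comparable Fibre Channel HBA price), not quotes from any vendor.

```python
def cost_per_gbit(card_price_usd: float, gbit_per_sec: float) -> float:
    """Dollars per Gbit/sec of nominal link bandwidth for one adapter."""
    return card_price_usd / gbit_per_sec

# Hypothetical list prices: a $1,500 10GbE adapter vs. a $1,500 4Gb
# Fibre Channel HBA (FC also needs fiber cabling, not counted here).
eth10 = cost_per_gbit(1500, 10)   # $150 per Gbit/sec
fc4 = cost_per_gbit(1500, 4)      # $375 per Gbit/sec
speed_ratio = 10 / 4              # 2.5x the nominal bandwidth
```

Even at identical card prices, 10GbE delivers 2.5 times the nominal bandwidth, so its cost per Gbit/sec is well under half that of 4Gb Fibre Channel, consistent with the "more than twice as fast, as much as a third cheaper" estimate.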
Neterion's 10Gbit/sec Ethernet cards mostly range between $1,000 and $2,000, depending on features: substantially more expensive than 1Gbit/sec Ethernet cards, though far cheaper than Fibre Channel.

Martin plans to increase data throughput even more by upgrading the array's dozen 500-Gbyte disks from 7,200 RPM versions to 15,000 RPM models and, eventually, to solid-state drives that store data in memory chips rather than on spinning disks.

"Pairing solid-state drives with 10GigE, the sky's going to be the limit," Martin says. "The limit is only the network speed again. The bigger story has been my finding this storage array that could handle all I needed in one box and VMware being able to support it. When I was able to plug those Neterion cards directly into the ESX boxes, I was like 'oh my god, this is fantastic.' The end users saw an immediate effect."
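The disk upgrade pays off in latency as well as throughput: average rotational latency is the time for half a platter revolution, so it falls in direct proportion to spindle speed, and solid-state drives eliminate it entirely. A quick check of the numbers:

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency: half a revolution, in milliseconds."""
    return 0.5 * 60_000 / rpm

slow = avg_rotational_latency_ms(7_200)    # ~4.17 ms per I/O, on average
fast = avg_rotational_latency_ms(15_000)   # 2.0 ms, better than half
```

Cutting per-I/O latency roughly in half compounds across the thousands of small random reads and writes that virtual machines generate, which is why the RPM bump, and eventually flash, matters as much as the 10GigE pipe.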