Symantec Corp. has released the results of its fifth annual Global IT Disaster Recovery survey.

According to the report, 93 per cent of organizations have had to execute their disaster recovery (DR) plans, and the average cost of implementing DR plans for each downtime incident is US$287,000. The median cost in Canada is US$496,500. The average budget for disaster recovery initiatives worldwide is US$50 million.

Responses within Canada reflected the worldwide results, but percentages differed noticeably in terms of virtualization backup practices. Only 10 per cent of Canadian respondents do not back up data on virtualized systems, compared to 36 per cent worldwide.

“The more stringent requirements in general were in North America,” said Dan Lamorena, senior manager of high availability and disaster recovery solutions at Symantec. Overall recovery times are faster and the cost of downtime is higher in Canada and the U.S. when compared to other countries surveyed, he noted. The average time it takes to “achieve skeleton operations after an outage” is three hours. To be fully “up and running after an outage,” the average is four hours, states the report.

The trends reflected in Symantec’s report generally mirror those of Info-Tech Research Group Ltd.’s mid-sized enterprise customer base, according to Darin Stahl, lead analyst at the London, Ont.-based firm.

Executive-level involvement in DR plans is rising. In 2007, 55 per cent of respondents reported DR committees involved the CIO, CTO or IT director; this dropped to 33 per cent in 2008, then rose to 67 per cent in 2009, according to the report. Symantec attributes the rise to DR “becoming a competitive differentiator” and other factors, including the size of DR budgets and the impact on customers.

The increased level of executive involvement is a significant issue, Stahl noted.
When executives are not actively involved with DR planning and business impact analysis (BIA), the IT group will often build an over-engineered plan, he said.

“You get this sort of notion from the business that everything’s critical … because they’re not going to assume that something is not critical. They’re not going to second-guess that maybe off-the-cuff comment from the executive,” said Stahl.

Info-Tech notices a downward trend in costs when executives get involved in the BIA and see how those costs line up, he pointed out. “The more structured that conversation takes place, the more a detailed methodology is followed, the likelihood that they’re going to achieve an optimal state of alignment and costs,” he said.

Recovery time objectives (RTOs) fell from five hours in 2008 to four hours in 2009. “In 2009, 75 per cent of tests were successful, more than doubling the 30 per cent of tests that met RTO objectives in 2008. While this rate also parallels executive involvement, they may or may not be correlated,” states the report.

One in four DR tests fails. This figure marks an improvement, however, over previous years: 50 per cent of DR tests failed in 2007, dropping to 30 per cent in 2008 and 25 per cent in 2009, according to the report. “Only 15 per cent say that tests have never failed,” states Symantec. “Although this is good news, one test failure in four is still alarmingly high.”

But the number doesn’t alarm Stahl. “Tests are meant to fail … it’s not alarming unless I’m getting to the point where customers are actually trying to recover and failing. That means they’re not testing, doing that remediation cycle through their DR,” he said.

“DR is a living thing. The infrastructure is continually changing and morphing, and it would be unreasonable to expect enterprises to be 100 per cent on the test year after year.
If they are, that means they’re probably not doing anything else in the infrastructure of the business,” he said.

Reasons cited for test failures included staff errors (47 per cent), technology failure (40 per cent), inappropriate processes (37 per cent) and out-of-date plans (35 per cent), states the report. Insufficient technology, which ranked third on the list of reasons for test failure in 2008, dropped to fifth place this year, notes Symantec.

While 96 per cent of IT organizations have tested their DR plans at least once, roughly 35 per cent of organizations perform their tests once a year or less, according to the report. “This is 12 per cent lower (and an improvement) from the 47 per cent that reported minimal testing in 2008. However, Symantec and most IT experts believe that every organization should be testing more frequently than once a year,” states the report.

While full end-to-end tests used to be the norm, according to Stahl, the trend is shifting to unit tests. “What happens now is they target tests (to) applications or services where they’ve made significant changes because they just can’t sustain a full test. It’s too big, it’s too much, it’s too complex,” he said.

Organizations aren’t performing more tests because of a lack of resources in terms of people’s time (48 per cent), disruption to employees (44 per cent), budget (44 per cent) and disruption to customers (40 per cent), states the report.

Rob Ayoub, global program director of Network Security at Frost & Sullivan Ltd., found the testing impact on customers and revenue a “very good finding” and one of the most interesting results of the survey. “That’s one of the things at the heart of disaster recovery that doesn’t get talked about a lot,” he said.

“Everyone says ‘test your plans, test your plans’ … but how do you test your plans on a real live working business without impacting your service levels?” said Ayoub.
“I’m not sure anyone has a really great answer for that.”

The study focused on organizations with existing plans and didn’t ask people from organizations without DR plans why they don’t have them, Ayoub pointed out. “I think testing is definitely a lot of it … there are a lot of pieces that discourage organizations,” he said.

More than one-quarter (27 per cent) of respondents do not test their virtual servers as part of their DR plans, and more than one-third (36 per cent) do not perform regular backups of data on virtualized systems, states the report.

The lack of storage management tools (53 per cent), lack of backup storage capacity (52 per cent) and lack of automated recovery tools (50 per cent) were reported as the top challenges in “protecting mission-critical data and applications in virtual environments.”

One of the most significant points raised in the survey, according to Lamorena, is the set of issues around virtualization. “As people are becoming more familiar with the technology and they are moving more mission-critical applications to these environments, they are encountering some of the challenges and are starting to look at what solutions are really going to help deal with this more complex virtual environment,” he said.

Based on the survey findings, Symantec recommends organizations curb the costs of downtime by implementing more automation tools that minimize human involvement, reduce the impact of testing on clients and revenue by adopting non-disruptive testing methods, and include those responsible for virtualization in disaster recovery planning.

Many automation solutions are available for disaster recovery, including high-availability clustering, monitoring the health of applications, automating the startup of applications at the data recovery site and reprovisioning servers, Lamorena pointed out.

“In the reality of this 24/7 economy and increasing business requirements, we think people are going to look at more automated solutions … the biggest
resource you struggle to find is the people. In a real disaster, no one wants to be leaving their homes to make sure their data centre is up and running,” he said.

Configuration health checks, also known as aggregators, are one non-disruptive method recommended by Lamorena. “This isn’t a testing tool, but it gives you real good sense of the health of the environment,” he said.

“Virtual environments should be treated the same as a physical server, showing the need for organizations to adopt more cross-platform and cross-environment tools or standardizing on fewer platforms,” states Symantec.