The demise of the mainframe has been predicted since around 1981, but IBM is hoping to put an end to this discussion once and for all with the launch of its zEnterprise mainframe server.
The company’s latest mainframe is described by Mark Anzani, IBM’s Vice President and CTO of System z, as a “new workload optimised system.” Reflecting on the new offering, he says: “It is going to shift the conversation towards who can offer the best deep management of the infrastructure, and also towards the company that can drive the broadest economic benefit.”
So the mainframe is here to stay, and Anzani believes it is now more a question of cross-platform workload economics, claiming that customers have greater choice over the platforms they employ. That’s because zEnterprise allows mainframe customers to centralise the management of workloads that run on different platforms and architectures, such as Java and Linux. Its introduction has certainly been welcomed by many industry experts, with the hope that the new system will deliver what Anzani describes as “integrated value”.
IBM is working hard to address a key customer concern with its new offering to ensure that the mainframe has a future: the perceived high cost of running, maintaining and operating mainframes.
“We have delivered technologies to make the platform more efficient, using a broad set of workloads,” explains Anzani, before adding that the conversation now needs to turn to where a particular workload, or set of workloads, fits best. In the past, for example, workloads have been shifted onto mainframes and away from them onto servers.
Customers need the ability to move workloads around whenever necessary, perhaps by distributing them between the two systems.
zEnterprise deserves praise
“Based on what I was discussing with IBM on the launch day of zEnterprise, I would say that it deserves to have a massive impact,” argues Clive Longbottom, head of research at analyst firm Quocirca.
“When you bring the zBX box together (the blade enclosure), you have an emerging cloud environment, and the software that comes with it makes it self-learning.” In other words, the automation of workload management can ensure that the right workload runs on the right platform.
Nevertheless, even though zEnterprise’s capabilities are seen as the next big step in the ongoing evolution of the mainframe, Longbottom says that customers were certainly not overwhelmed by it. Someone listening to IBM’s sales pitch commented: “OK, it’s the next mainframe”. This lacklustre response implies that a more persuasive argument is needed for why people should migrate to the mainframe, or employ it as part of a hybrid IT architecture.
Mainframe misconceptions
The trouble is that many people still think of the mainframe as a technological dinosaur, and Longbottom thinks IBM’s main challenge is to attract customers who didn’t buy it in the past for this reason. The view that the mainframe belongs to the 1970s is disproven, according to Mark Settle, CIO of BMC Software: analyst firm Aberdeen Group has found that 70 per cent of the world’s data is still processed on the mainframe.
“Mainframes are often hiding under the covers; they are behind the scenes doing a lot of the supply chain rebalancing, the settlement processing in the financial world and wherever you have massive data volumes,” he explains.
Yet there are still a number of server and distributed systems die-hards, as well as mainframe die-hards.
Either way, it’s important to remember that the mainframe is “a critical component of most large data centres,” he emphasises, before adding that “there are a whole bunch of distributed systems that surround legacy mainframe systems to support activities like e-commerce, customer support and supply chain operations.”
Settle adds that the distributed systems are simply collecting and pre-processing transactional data “that is ultimately fed back into legacy mainframes.” BMC’s financial sector customers have commented that they see mainframes as the only platform capable of securely and reliably processing vast amounts of data, and he argues that they are unlikely to change this viewpoint.
“Mainframes have historically delivered higher levels of utilisation than distributed systems environments,” Settle adds. So while recent advances in server virtualisation have enabled Wintel platforms to reach utilisation levels of 70 per cent, mainframes have routinely achieved 90 per cent.
However, these days it’s not necessarily a question of choosing servers within a distributed environment over mainframe systems.
“There is certainly a stronger business case now for using a hybrid environment than ever before, and that’s because the modern business operation demands the scalability of distributed and highly virtualised datacentres,” he explains, before commenting that “for many large organisations, such as those in finance or commerce, the sheer reliable processing power of the mainframe is equally critical.”
Although Settle says that he doesn’t know of any investments in mainframe-based cloud computing, he believes that IBM sees zEnterprise’s potential for it, and is building its service offerings around distributed systems.
Professor Bryan Foss, an independent board-level advisor, thinks this is good news because businesses are “increasingly looking towards new operational funding models, including SaaS and cloud, that provide speed and flexibility around the mainframe”. He implies that CIOs want alternatives to the traditional server-versus-mainframe arguments, and that the coming of age of the cloud will significantly change this debate.
Evaluating ROI and TCO
While this discussion will be reinvigorated by zEnterprise, doing more with less money and getting the most out of legacy systems remains a topic that simply isn’t going to go away. During a recession there is a greater need for organisations to squeeze the most out of their existing systems, and we won’t know the true impact of IBM’s new mainframe until some time after it first ships in September 2010. Mainframe customers may simply want to defer their upgrades, but the additional capabilities and functionality of zEnterprise might persuade them to change their minds.
If that’s the case, then perhaps now is the time to evaluate the return on investment (ROI) and total cost of ownership (TCO) of their existing mainframe and distributed systems in order to uncover the benefits of zEnterprise.
Whether an organisation implements a mainframe, server or hybrid systems environment, ROI and TCO are two of the measures most frequently used before making a buying decision. Server hardware vendors will often claim that they are cheaper. In some cases they are right, but not always. Rich Ptak, managing partner of analyst firm Ptak, Noel & Associates, believes that the calculation of these measures represents one of the most significant problems.
“There has been a failure to accurately identify and allocate the costs associated with the mainframe versus distributed systems,” he argues. Quite often capital and operating expenses “should have been distributed across the systems,” he reveals. Sometimes the incremental management and maintenance costs of the distributed systems network are ignored too.
Misallocated costs
Marcel den Hartog, EMEA mainframe marketing director at CA Technologies, says that working out the ROI calculations for a mainframe is quite easy because “you only get very few bills”. However, he agrees that it becomes very difficult when distributed expenses are involved.
“What we are finding is that a lot of the costs are not in the right cost centre, and quite often the choices that are made are not with the right business reasons in mind,” he explains, before stressing that “the whole promise that distributed systems are cheaper and more flexible is simply not true, and a lot of the distributed tools from CA have the mainframe knowledge with them”.
According to BMC Software’s Settle, the right calculation of ROI and TCO “is in the eye of the beholder”, and he agrees that “there is some subjectivity in the way that costs and benefits are defined, and the calculations can be misleading”.
Quocirca’s Longbottom shares this view. “We tend to keep away from TCO and ROI because if you give me a spreadsheet I can prove that it is the lowest or highest expense,” he says. To calculate these metrics meaningfully, he says, you need to know the baseline for them. “What is it that gives the best value to the organisation, rather than the lowest cost?” he asks.
“So if a scaled-out distributed environment costs £100,000 but only provides you with £90,000 worth of value, it should be compared with a mainframe that costs £500,000 but which delivers £750,000 of value to the organisation.”
When determining ROI and TCO, Settle suggests focusing on the skills required to maintain and support the different systems, their flexibility in terms of the ability to move workloads across different platforms, and the costs or savings attributable to labour and power.
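Longbottom’s like-for-like comparison can be expressed as a simple ROI calculation: the value delivered minus the cost, divided by the cost. The short sketch below (purely illustrative, in Python) applies that formula to the figures he quotes:

```python
def roi(value: float, cost: float) -> float:
    """Return on investment as a fraction: (value delivered - cost) / cost."""
    return (value - cost) / cost

# Figures from Longbottom's example
distributed = roi(value=90_000, cost=100_000)   # scaled-out distributed environment
mainframe = roi(value=750_000, cost=500_000)    # mainframe

print(f"Distributed ROI: {distributed:.0%}")    # -10%
print(f"Mainframe ROI:   {mainframe:.0%}")      # 50%
```

On these numbers the cheaper distributed environment actually destroys value, while the mainframe returns 50 per cent on its cost – which is Longbottom’s point about baselining on value to the organisation rather than chasing the lowest expense.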
One CIO he used to work with once joked that servers tend to multiply like rabbits, and he comments that the mainframe has to expose itself to these new and modern platforms to remain significant.
Nathaniel Briggs, CEO of web presence experts Synthetic Magic, measures the cost per transaction of the deployed technology, the financial resources required to support customers, average end-user alignment times, the average value per transaction and business performance per outage. “In simple terms it’s about focusing on business today and business tomorrow,” he says.
Mark Anzani discloses that IBM has its own methodology for helping customers “to calculate the costs of running an entire computing architecture – not just the mainframe, and in my experience TCO-based methods are the most complete as you have to consider many factors: the number of users, transaction volumes, the cost of managing individual servers, licensing costs, and so on.”
One particular consideration is the reduction of IT infrastructure and management complexity. It’s an important issue for CIOs because complexity equals cost. So they are very much focused on making IT as simple as they can, while ensuring that it is the needs of the business, not the technology, that drive their decision-making. More attention will therefore be focused on the management of their systems, with the aim of ensuring that they deliver the greatest ROI – or economic benefits, as Anzani phrases them.
Costs remain ‘prohibitive’
Mainframe pricing nevertheless remains a hot issue, and one that puts people off it as a viable platform. “We have our own mainframe and distributed servers,” says Lacy Edwards, CEO of mission-critical mainframe experts NEON Enterprise Software.
He argues that it’s not just the cost of the mainframe hardware that remains prohibitive to many people, but also the high expense of software licensing. “If you look at it from the customers’ perspective, that’s why they are comparing the different systems, and software is the key reason why they say the cost of running a mainframe is too high.”
In NEON’s view the mainframe will only grow if these high costs are addressed. Edwards says that IBM created the specialty processors in order to discourage people from moving away from the mainframe. “However, many customers were disappointed in the yield versus their promise,” he explains. So NEON developed zPrime in response to customer demand for a better and cheaper way to exploit the zIIPs and zAAPs, the specialty processors.
Edwards claims that zPrime, which has been described by some commentators as an ‘exploitative technology’, allows customers to make better use of the specialty processors. “We are currently responsible for customers wanting to use more specialty processors, but unfortunately IBM is unwilling to sell them to our customers,” he alleges, suggesting that this action could in itself force people to move off the mainframe. That’s because traditional workloads are very costly, and NEON feels duty-bound to help customers reduce these overheads.
“I am not aware of anything else that is like what zPrime is doing, and I have not personally seen any impact at this point on the way customers are making their purchases,” comments Anzani. “The only product where there are objections regarding its installation comes back to the difference of opinion between NEON and IBM regarding zPrime.” Although IBM usually welcomes exploitation of the specialty processors, the company views any installation of zPrime as unauthorised.
NEON has therefore raised the question of whether customers need IBM’s authorisation to install ‘exploitative technologies’ like this.
Yet IBM does offer a free and approved API to allow software vendors to program improvements, and it usually welcomes any solution – even one from a competitor – that helps to improve the efficiency of the mainframe. The question regarding zPrime is whether its installation breaches IBM’s licensing agreements. Many don’t think so, but NEON’s complaints are being examined by the Department of Justice in the US and by the European Commission. Some feel that IBM is abusing its dominant position, but Anzani argues that there is plenty of competition within the mainframe market from a workload management perspective.
The dispute is still putting some customers off purchasing zPrime. Jeff Cattle, head of computer services at fashion retailer JD Williams, says that his company had a look at it, but he delivers a warning to the warring parties. “We won’t look at zPrime again until IBM and NEON resolve their differences, and there is a long way to run on that one,” he says. In the meantime his firm is “maximising the use of the features that IBM has already presented – the opportunities such as sub-capacity pricing.” He adds that JD Williams would not consider any product that would put his organisation in a defensive position against IBM, but he would be willing to evaluate similar solutions that fall within IBM’s guidelines.
Securing the future
As a mainframe customer, Cattle would like the next steps for the mainframe to offer more business-relevant applications that can be supported on this kind of system. His firm has an eye on Linux, and so it has purchased the new z196 (zEnterprise), which will be delivered to his organisation next month.
His commitment to the mainframe is illustrated by the fact that his company has two mainframes: one running traditional CICS, while the other hosts 50 websites using WebSphere running on z/OS. The latter accounts for 40 per cent of the firm’s order values, delivering “£280m to us per year in sales,” he reveals. His next steps and discussions revolve around increasing workload capacity, and that’s where zEnterprise comes in. He says it has significantly increased zIIP and zAAP capacity, delivering performance without increasing the software price tag.
By reducing the cost of the mainframe and making sure that it can interoperate with other platforms, IBM can keep the mainframe around for some time to come. However, there is another step being taken to secure its future. Many mainframe specialists are old hands who have worked on the platform for decades, and most industry experts agree that the younger generation needs to become fully equipped with their skills and knowledge before it’s too late. This in itself will move the mainframe a step forward, but there is also an ongoing job to be done to educate, persuade and inform decision-making executives about the mainframe facts. The mainframe lives on. It is still relevant, but its associated costs have to continue to fall.