Years ago, processing on the mainframe was fairly predictable. All day Monday you ran transactions, which peaked a couple of times. Then you reconciled everything in big batch runs. The daily cycle repeated itself through the week, and on the weekends you had ample time to do maintenance.

Whether or not we realized it at the time, it was a relatively simple and orderly world.

But nights and weekends aren't what they used to be. Today the mainframe staff must be able to perform batch runs, backups and restores, and other maintenance work, all in spite of 24/7 transaction processing.

The mainframe must be able to accommodate not only the normal peaks and valleys of the business, but also the not-so-normal peaks and valleys. Largely because of the Web, your customers can respond to world events in near-real time, occasionally swamping your systems with unprecedented transaction volumes.

Because of all this, most large businesses today are interested in increasing the "agility" of their mainframes. The trendy word agility is just a new term for an old idea: the ability to adapt to change. Of course, the truth is that IT has always needed to maintain the adaptability of the mainframe. What is new today is the intensity of the need.

Running IT as a Business

Many companies that have re-engineered their processes on the business side to gain better economies of scale and higher levels of productivity are beginning to recognize that they've neglected the IT side.

In fact, with IT budgets holding steady, relying solely on increases in hardware, software and staff isn't always an option. To achieve economies of scale and realize higher productivity, IT teams need instead to focus their efforts on improving IT processes.

Well, you may say, productivity was the reason for creating IT in the first place: IT provides automation that makes other departments more productive.
Just proving that your mainframe saves money for the company is not enough anymore. You also must prove that IT is helping to bring in new revenue. Now is the time to look at IT itself as a business and run it as a business, to meet constantly changing demands. It is time to make IT more productive, more scalable and, ultimately, more adaptable.

And when it comes to this issue, the mainframe is more critical than ever. Today, your mainframe must support new applications, including new types of applications that don't look or behave like traditional mainframe applications.

What's Standing in Your Way?

Years ago, you had a limited number of mainframe applications. You could map them all; they were discrete, finite and predictable. After testing an application and measuring capacity and performance, you could fine-tune the application to the point where any mainframe system programmer could define all the transactions that could occur within it.

Today, opportunities won't wait for IT to develop applications in the classical way. You can't afford to send experts to analyze each subsystem to discover whether a problem exists, or to find the problem by eliminating all other possibilities. These new dynamics require a holistic approach to IT. You must focus on a problem in the context of the whole data center and resolve it rapidly.

But several challenges face IT departments when it comes to creating that kind of nimble, adaptable organization:

The continued use of old IT processes and tools. As the industry changes, so should the tools. Using nothing more than traditional methods can leave an IT manager frozen in his or her tracks, unable to determine how an outage truly affects the business. You may unwittingly solve less-important problems before more-important ones, and occasionally cause frustration or anger on the business side.

The attrition of deep mainframe experts.
In a typical data center, there are several deep mainframe experts: staff members with unique subject expertise. No one else knows what these experts know, and no one can backfill them. If these experts are ever out of the office recuperating from an illness or accident, it may be literally impossible for anyone to cover for them effectively. When they take other employment or retire, the loss of knowledge can disrupt both IT and the business.

The need to accommodate business growth without adding IT staff. The inability to scale and cross-train your staff can impede adaptability. Some people are trained in one silo and can't be cross-utilized in another. Others don't currently have the skills to work in the mainframe silo at all. But contrary to popular wisdom, the challenge here is not so much a matter of the skills people possess as the organization's ability to implement consistent processes across the entire IT environment. Processes are often ill-defined, and many that could and should be automated are instead performed manually and followed inconsistently. By creating efficient processes that map back to the business, organizations can overcome some of the challenges associated with limited staff.

The need for a common IT methodology for managing complex applications. Many applications today pass through multiple processing platforms. IT cannot manage these applications with fragmented methods. Often, for example, transaction performance becomes unacceptable and yet every group within IT claims (correctly) that its system is running within the required service level. The result, of course, is time wasted on conference calls, while the business loses hundreds of thousands or even millions of dollars per hour.

A Solution: Evolving Process Automation

Mainframe automation is nothing new, but the changing environment signals that there are still more obstacles to tackle.
Fortunately, technology can help you overcome these obstacles and extend your mainframe. The primary technology that's useful in this regard is "advisor" technology, offered by vendors of systems and data management solutions.

When advisor technology is built into tools, it can intelligently automate many tasks and processes, optimize performance, increase staff productivity and scalability, and reduce errors.

Advisor technology can help your people decide if and when to run jobs, and often can run the jobs for them automatically. Because it eliminates so much mundane work, the technology raises the productivity of mainframe staffers, enabling them to focus on more valuable, innovative projects. It also allows people who are not experts to run jobs reliably. As a result, your people can be effective beyond a single platform and, to some extent, can become more interchangeable; in other words, your humanpower can become more scalable.

Another major contribution of advisor technology is in scheduling. A stock exchange, for example, can't open in the morning until it completes a large number of time-consuming overnight batch jobs consisting of hundreds of job steps. Software that includes advisor technology can look at all the job steps, identify the critical path, and determine, based on history, how long a job will take. It not only detects that a job stream is running slower than normal, but can also determine where the impact will be felt, alerting the IT staff when a critical completion deadline is in danger of being missed. With scheduling software, the staff knows whether all the jobs can be run before the morning session even begins.

The software also can look at the batch window and determine, based on dependencies and interdependencies, the best order for executing jobs: how many can be run in parallel, which ones have to be serialized and so on.
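To make the scheduling idea concrete, here is a minimal Python sketch of the two analyses described above: grouping jobs into parallel "waves" from their dependency graph, and estimating the earliest completion time along the critical path using historical run times. The job names, durations and function names are all hypothetical illustrations, not the workings of any particular advisor product, which would perform far more sophisticated versions of this analysis.

```python
from collections import defaultdict

def schedule_waves(jobs, deps):
    """Group jobs into waves: jobs within a wave have no dependencies
    on one another and can run in parallel; waves run in sequence."""
    indegree = {j: 0 for j in jobs}
    children = defaultdict(list)
    for job, prereqs in deps.items():
        for p in prereqs:
            children[p].append(job)
            indegree[job] += 1
    wave = [j for j in jobs if indegree[j] == 0]
    waves = []
    while wave:
        waves.append(sorted(wave))
        nxt = []
        for j in wave:
            for c in children[j]:
                indegree[c] -= 1
                if indegree[c] == 0:
                    nxt.append(c)
        wave = nxt
    return waves

def critical_path_minutes(jobs, deps, history):
    """Earliest possible overall completion time, assuming unlimited
    parallelism: the longest dependency chain, weighted by each job's
    historical run time."""
    finish = {}
    def finish_time(job):
        if job not in finish:
            start = max((finish_time(p) for p in deps.get(job, [])), default=0)
            finish[job] = start + history[job]
        return finish[job]
    return max(finish_time(j) for j in jobs)

# Hypothetical overnight batch stream (names and minutes are invented).
jobs = ["extract", "clean", "reconcile", "report", "archive"]
deps = {"clean": ["extract"], "reconcile": ["clean"],
        "report": ["reconcile"], "archive": ["extract"]}
history = {"extract": 30, "clean": 20, "reconcile": 45,
           "report": 15, "archive": 10}

waves = schedule_waves(jobs, deps)          # run order, with parallelism
eta = critical_path_minutes(jobs, deps, history)
if eta > 100:  # compare the estimate against the batch-window deadline
    print(f"warning: critical path is {eta} minutes")
```

The wave grouping is just a topological sort taken level by level, and the completion estimate is the classic critical-path calculation; a real scheduler would also weigh resource limits, job priorities and live progress against the historical baseline.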
Manually, this analysis would be a nightmare.

Of course, your deep experts may never be interchangeable. For example, you may always need a deep expert on hand who can repair a failed IMS database or fix a storage problem. But the general staff can triage problems and call in the experts only when needed, conserving the experts' time.

Automation also can help you add mainframe skills to your staff. Since younger technologists are usually not enamored of the traditional "green screen," it is becoming difficult to attract graduates to, or cross-train existing staff on, the mainframe.

Here again, the software can help. Some vendors offer a graphical user interface (GUI) that gives your new people a familiar format in which to manage and resolve mainframe problems. Without a GUI, no single human being could keep up with the volume of alerts and exceptions that occur in a typical day. But with a GUI, you can boost staff efficiency, minimize errors and groom new expertise.

The right technology can dramatically improve the efficiency of your processes, improve system performance, support 24/7 availability, and even fix things on the fly. You can do much more with the same people.

When processes and expertise can be automated to this level, mainframe operations become much more adaptable to business change. With increased adaptability in IT, the business can rapidly introduce new services and generate new revenue.

(For more on mainframes, see Mainframes Under Fire, also by Ralph Crosby.)

Ralph Crosby is the chief technology officer for the Mainframe Service Management Business Unit at BMC Software, responsible for setting the strategic direction for the entire portfolio of IBM Mainframe products. Prior to assuming the CTO role, Crosby was the DB2 architect responsible for technology directions across the full line of DB2 products. He has authored several products in the DB2 product line and has also worked as an architect in the storage management area.
Prior to joining BMC Software, he worked as a database administrator, applications architect and systems programmer in a variety of environments centered on the IBM mainframe but also including Windows and UNIX platforms. Crosby holds a BS in computer science from California Polytechnic State University and an MBA from Fordham University.