READER ROI

* Learn what infrastructure decisions are important in dealing with data overload
* Hear how MetLife's organizational structure helps it corral data
* Discover some of the top data-related issues for 2001

Carol Stewart has watched data pile up like ominous digital clouds on the information horizon. As vice president of data administration at the Metropolitan Life Insurance Co. (MetLife) in New York City, it's her job to predict when those clouds will burst and to find ways to disperse them before they overwhelm MetLife's systems. Stewart supervises a group of 180 experts who oversee the insurance giant's nearly 600 production databases and who work on all of MetLife's data and database applications. With the constant need for more and better customer information, Stewart has seen MetLife's data-handling needs grow explosively. The number of MetLife production databases has more than doubled in the past three years, and Stewart's data team has grown by more than 60 percent (with each staffer supporting 80 percent more data applications) in the same period. With no slowdown in sight, Stewart believes MetLife has weathered the storm primarily by adding a new messaging architecture layer to the enterprise and by accurately forecasting database needs and planning accordingly. She also cites the formation of a centrally organized team of database experts with varied backgrounds as key to MetLife's data success story.

CIO: How do you foresee a potential data overload coming?

Stewart: One of the chief indicators is increasing demand from people in the business community for information--when you see the same information in your organization being copied or replicated to serve a number of different functions.
When you say, "Wait, I've just set up 45 different databases that all fundamentally hold the same information," then you know you need to start thinking more architecturally in terms of enterprise strategy. What's also happening is that you may have started capturing a lot of data, but the real overload comes in figuring out how to organize it and get it back to the people who need it to make decisions.

Also, some data projects that start out on a small scale can become overloaded because pressure comes to grow them fast. If they weren't originally built to scale up or handle a massive load, it can get overwhelming.

What trends today are causing organizations to change how they deal with data handling and storage?

Improved customer service--which every company in America is after--requires that we maintain more information about our customers. The bar is being raised on customer service in all industries. Customers expect a level of service that probably wasn't available four or five years ago. Also, marketing departments are looking for more information about customers and potential customers. This means more information has to be kept online--not back on old-fashioned microfiche--and more historical information needs to be kept as well, because we need to know what's been happening over time in a way that we didn't need to in the past.

How important is speed?

The speed with which IT or businesses need to respond to opportunity has increased exponentially in the past few years. When I started in this business, it was absolutely fine for a businessperson to come along and say, "I think we need to do such and such a study," and somebody would disappear into the tape archives and come back six months later and say, "We got the data!" That's absolutely unacceptable at this point.
Now that information needs to be there fast, preferably immediately.

What other factors affect data architecture and infrastructure plans?

I think the major indicators that tell you which way you need to go are the latency requirements--that is, how much of a delay there is in accessing data. If the business purpose of this joining of information requires zero latency--in other words, immediate access and responsiveness--then you can't, for example, do data warehousing, because warehousing by definition has a fair amount of delay associated with it. But if the business purpose requires heavy analytics, then you're definitely in the warehousing space. It tends to be one or the other. You don't usually find a business need for heavy analytics going along with zero-latency data.

How do you recognize what the business requirements will be?

Within the IT group at MetLife we have a governance board consisting of the CTO and line-of-business CIOs who meet fairly often. As people's concerns or programs hit that table, it becomes very obvious where the trends are and what's coming. So I would say it tends to be through the process of discussion. When we see multiple businesses asking for the same sort of thing, it becomes really clear that we've got something important going on. For example, the idea of 24/7 availability of major information storage--that's a demand clearly driven by our increasing presence on the Internet.

There are a number of strategies you can take to solve something like the 24/7 problem. At the end of the day, much of it comes down to two questions. First: Are we really talking 24/7? Maybe we want to look at less costly ways of achieving the business objective, be it 23/7 or 24/6. The second question is: What are we prepared to pay for it?
There are occasions when you do in fact have to get to 24/7, and there's a cost associated. So there's always a discussion about value.

What's been your approach to 24/7 at MetLife?

We're putting in place what we call our message box. It's a very ironclad, high-availability piece of infrastructure that allows unencumbered messaging--an infrastructure layer that makes it much easier and cheaper for applications to communicate with each other--with guaranteed delivery across all the various applications and platforms. This robust, highly available component is easy for the various applications to jump onto. The key is that it helps contain our costs: when you spread it across a number of different functions, the costs associated with it come down. In many cases, it also allows us to deliver application function to our business units--and hence to the marketplace--that much more quickly.

How do you pitch the idea that there is a return on this kind of investment? Or the notion that something almost as esoteric as a universal messaging layer can really pay off?

It was relatively easy to look at the message system costs, average those out and say to the business: "An average application that does this costs you X. If we were to build a robust infrastructure, not only will you get the advantage of faster time-to-market, but this is going to pay for itself by the time you've built three average applications." This particular system was not a difficult sell at all, because a number of the line-of-business CIOs were very familiar with the kind of messaging work going on at the time in their own shops and, because they were privy to some of the bills, understood what was involved in the cost. So the CIOs were convinced there would be a payoff in terms of cost savings and efficiencies.
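The interview doesn't describe the message box's internals, but the core idea--a shared layer that persists each message until its consumers successfully handle it--can be sketched in a few lines. This is a minimal illustration only; the class and topic names here are hypothetical, not MetLife's.

```python
# Minimal sketch of a guaranteed-delivery messaging layer (store-and-forward,
# at-least-once semantics). All names are hypothetical illustrations.
from collections import deque


class MessageBox:
    def __init__(self):
        self.pending = deque()   # messages retained until successfully handled
        self.subscribers = {}    # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        # Persist first: the message stays queued until delivery succeeds.
        self.pending.append((topic, payload))
        self.deliver()

    def deliver(self):
        # A message leaves the queue only when every handler completes;
        # failures keep it queued for a later retry attempt.
        undelivered = deque()
        while self.pending:
            topic, payload = self.pending.popleft()
            try:
                for handler in self.subscribers.get(topic, []):
                    handler(payload)
            except Exception:
                undelivered.append((topic, payload))  # retain for retry
        self.pending = undelivered
```

A publishing application never talks to its consumers directly--it hands the message to the shared layer, which is the cost-spreading property Stewart describes: every application pair reuses one delivery mechanism instead of building its own.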
MetLife's type of central data administration group sounds unusual.

We have 180 people with three distinct skill sets: database administrators, middleware specialists and data analysts. We consider ourselves the center of excellence for all things associated with data and application integration, because the way you get applications talking to each other is by moving data around in some form.

A lot of the business literature and management consulting groups will say they consider this kind of organization a best practice. But having said that, I haven't actually seen it anywhere else. The value in it is severalfold. First, it provides a sort of vertical integration--from the base capabilities you need to support production databases and middleware, up through the design of those databases, up through information architecture. You get a view across the entire enterprise from a central spot, so there's a lot of value there. The second, bottom-line piece of value is that middleware, database and even data design specialists are very expensive and in high demand. When you run a central group, you get to leverage those expensive resources more broadly across the enterprise with just about no downtime.

What issues are you focusing on in the coming year?

One that I think is coming to the fore--again driven by our exposure to customers over the Internet--is data quality. [See "Wash Me," CIO, Feb. 15, 2001.] It's an endemic problem: As you get more data, and as you have to drive the latency down, you've got less time to fix it. You also need to share that information across wider and wider parts of your organization, because the user may not be in the department that collected it in the first place and therefore doesn't understand the quirks [such as abbreviations or shorthand].
I think this problem has become much more visible and needs more serious attention than it has probably received in the past.

What other issues are you focused on?

XML. If you're in B2C or B2B, you'd better be focused on XML. There are some things about it that are very attractive; there are some things that are also quite scary. For example, it sucks up an awful lot of bandwidth and computing time as you're stripping the stuff apart.

Also, which XML should we pay attention to?

There are a million of them out there. How do you reconcile all of these standards? Do you need some sort of thesaurus that will allow you to translate from one standard to another on the fly? At the end of the day, you probably do. Well, guess what? It doesn't exist. Do we build it? Do we wait for it to be built and buy it? Do we just fiddle around with the concept this year? There are a lot of those questions in the XML space.

Is there a theme to these questions, to these plans?

Customer service again. I guess that's my theme for today and for the year. In order to provide the service level that clients expect, we need to keep available much more information about the services we're currently providing and about the communications we've had with them in the past. That's what's driven us in the past and will continue to push us this year.