READER ROI

* Learn what infrastructure decisions are important in dealing with data overload
* Hear how MetLife's organizational structure helps it corral data
* Discover some of the top data-related issues for 2001

Carol Stewart has watched data pile up like ominous digital clouds on the information horizon. As vice president of data administration at the Metropolitan Life Insurance Co. (MetLife) in New York City, it's her job to predict when those clouds will burst and to find ways to disperse them before they overwhelm MetLife's systems. Stewart supervises a group of 180 experts who oversee the insurance giant's nearly 600 production databases and who work on all of MetLife's data and database applications.

With the constant need for more and better customer information, Stewart has seen MetLife's data-handling needs grow explosively. The number of MetLife production databases has more than doubled in the past three years, and Stewart's data team has grown by more than 60 percent (with each staffer supporting 80 percent more data applications) in the same period. With no slowdown in sight, Stewart feels MetLife has weathered the storm primarily by adding a new messaging architecture layer to the enterprise and by accurately forecasting database needs and planning accordingly. Stewart also cites the formation of a centrally organized team of database experts with varied backgrounds as key to MetLife's data success story.

CIO: How do you foresee a potential data overload coming?

Stewart: One of the chief indicators would be an increasing demand from people in the business community for information: when you see the same information in your organization being copied or replicated to service a number of different functions. When you say, "Wait, I've just set up 45 different databases that all fundamentally hold the same information," then you know you need to start thinking more architecturally, in terms of enterprise strategy. What's also happening is that you may have started capturing a lot of data, but the real overload comes in figuring out how to organize it and get it back to the people who need it in order to make decisions. Also, some data projects that start out on a small scale can become overloaded because pressure comes to grow them fast. If they weren't originally built to scale up or handle a massive load, it can get overwhelming.

What trends today are causing organizations to change how they deal with data handling and storage?

Improved customer service, which everybody, every company in America, is after, requires that we maintain more information about our customers. The bar is being raised on customer service in all industries. Customers expect a level of service that probably wasn't available four or five years ago. Also, marketing departments are looking for more information about customers and potential customers. This means more information has to be kept online, not back on old-fashioned microfiche, and more historical information needs to be kept as well, because we need to know what's been happening over time in a way that we didn't need to in the past.

How important is speed?

The speed with which IT or businesses need to be able to respond to opportunity has increased exponentially in the past few years.
When I started in this business, it was absolutely fine for a businessperson to come along and say, "I think we need to do such and such a study," and somebody would disappear into the tape archives and come back six months later and say, "We got the data!" That's absolutely unacceptable at this point. Now, that information needs to be there fast, preferably immediately.

What other factors affect data architecture and infrastructure plans?

I think the major indicators that are going to tell you which way you need to go are the latency requirements, that is, how much of a delay there is in accessing data. If the business purpose of this joining of information requires zero latency, in other words immediate access and responsiveness, then you can't, for example, do data warehousing, because warehousing by definition has a fair amount of delay associated with it. But if the business purpose requires heavy analytics, then you're definitely in the warehousing space. It tends to be one or the other. You don't usually find a business need for heavy analytics going along with zero-latency data.

How do you recognize what the business requirements will be?

Within the IT group at MetLife we have a governance board that consists of the CTO and line-of-business CIOs who get together fairly often. It becomes very obvious as people's concerns or programs hit that table where the trends are and what's coming. So I would say that it tends to be through the process of discussion. When we see multiple businesses asking for the same sort of thing, it becomes really clear that we've got something important going on here. For example, the idea of 24/7 availability of major information storage is a demand clearly driven by our increasing presence on the Internet. There are a number of different strategies that you can take to solving something like the 24/7 problem. And at the end of the day, much of it comes down to two questions. First: Are we really talking 24/7? Maybe we want to look at some other ways of achieving the business objective that are less costly, be it 23/7 or 24/6. The second question is: What are we prepared to pay for it? There are occasions when you do in fact have to get to 24/7, and there's a cost associated. So there's always a discussion about value.

What's been your approach to 24/7 at MetLife?

We're putting in place what we call our message box. It's a very ironclad, high-availability piece of infrastructure that allows unencumbered messaging (an infrastructure layer that makes it much easier and cheaper for applications to communicate with each other) with guaranteed delivery across all the various applications and platforms. This is a robust, highly available infrastructure component that is easy for the various applications to jump onto. The key here is that it helps contain our costs, because spreading it across a number of different functions lowers the cost to each. In many cases, it also allows us to deliver the application function to our business units, and hence to the marketplace, that much more quickly.
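Stewart doesn't name the technology behind the message box, but the guaranteed-delivery, cross-platform messaging she describes maps closely onto the standard JMS API of that era. The following is a minimal sketch of what publishing to such a shared layer might look like; the JNDI name, queue name, and message content are hypothetical, not MetLife's actual configuration.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

// Hypothetical client of a shared "message box" layer, sketched with
// the standard JMS API. The JNDI and queue names are illustrative only.
public class MessageBoxClient {
    public static void main(String[] args) throws Exception {
        Context ctx = new InitialContext();
        ConnectionFactory factory =
                (ConnectionFactory) ctx.lookup("jms/MessageBoxFactory");

        Connection conn = factory.createConnection();
        try {
            // A transacted session lets the sender treat the publish
            // as an all-or-nothing unit of work.
            Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
            Queue queue = session.createQueue("POLICY.UPDATES");
            MessageProducer producer = session.createProducer(queue);

            // PERSISTENT delivery asks the broker to write the message
            // to stable storage before acknowledging it; in JMS terms,
            // this is the mechanism behind "guaranteed delivery."
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);

            TextMessage msg = session.createTextMessage("policy 12345 updated");
            producer.send(msg);
            session.commit(); // after this, delivery is the broker's job
        } finally {
            conn.close();
        }
    }
}

Any JMS-compliant broker can be bound behind the lookup, which is part of what makes a shared layer like this cheap for new applications to adopt.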
How do you pitch the idea that there is a return for this kind of investment? Or the notion that something almost as esoteric as a universal-messaging layer can really pay off?

It was relatively easy to look at the message system costs and average those out and say to the business: "Now, an average kind of application that does this costs you X. If we were to build a robust infrastructure, not only will you get the advantages of speeding time-to-market, but this is going to pay for itself by the time you've built three average applications." This particular system was not a difficult sell at all, because a number of the line-of-business CIOs were very familiar with the kind of messaging work going on at the time in their own shops and, because they were privy to some of the bills, understood what was involved in the cost. So the CIOs were convinced there would be a payoff in terms of cost savings and efficiencies.

MetLife's type of central data administration group sounds unusual.

We have 180 people with three distinct skill sets: database administrators, middleware specialists and data analysts. We consider ourselves the center of excellence around all things associated with data and application integration, because the way you get applications talking to each other is by moving data around in some form. A lot of the business literature and management consulting groups will say that they consider this kind of organization a best practice. But having said that, I haven't actually seen it anywhere else. The value in it is severalfold. First, it provides a sort of vertical integration, from the base capabilities that you have to have in order to support production databases and middleware, up through the design of those databases, up through information architecture. You can get a view across the entire enterprise from a central spot. So there's a lot of value there. The second real, bottom-line piece of value is that middleware, database and even data design specialists are very expensive and in high demand. So when you run a central group, you get to leverage those expensive resources more broadly across the enterprise with just about no downtime.

What issues are you focusing on in the coming year?

One that I think is coming to the fore, again driven by our exposure to customers over the Internet, is data quality. [See "Wash Me," CIO, Feb. 15, 2001.] It's an endemic problem: As you get more data, and as you have to drive the latency down, you've got less time to fix it. You also need to share that information across wider and wider parts of your organization, because the user may not be in the department that collected it in the first place and therefore doesn't understand the quirks [such as abbreviations or shorthand]. I think this problem has become much more visible and needs more serious attention than it has probably gotten in the past.

What other issues are you focused on?

XML. If you're in B2C or B2B, you'd better be focused on XML. There are some things about it that are very attractive; there are some things that are also quite scary. For example, it sucks up an awful lot of bandwidth and computing time as you're stripping the stuff apart. Also, which XML should we pay attention to? There are a million of them out there. How do you reconcile all of these standards? Do you need some sort of thesaurus that will allow you to translate from one standard to another on the fly? At the end of the day, you probably do. Well, guess what? It doesn't exist. Do we build it? Do we wait for it to be built and buy it? Do we just fiddle around with the concept this year? There are a lot of those questions in the XML space.
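The "thesaurus" Stewart describes didn't exist as an off-the-shelf product, but a homegrown version would most likely have been built on XSLT, which Java's standard javax.xml.transform API already supported at the time. A minimal sketch follows, with hypothetical file names; the stylesheet is where the mapping between the two vocabularies would actually live.

import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Sketch of an on-the-fly translator between two XML vocabularies.
// All file names are hypothetical; the XSLT stylesheet encodes the
// mapping from the external standard's tags to the internal ones.
public class XmlThesaurus {
    public static void main(String[] args) throws Exception {
        Transformer translator = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("partner-to-internal.xsl")));

        // Each incoming document is parsed and rewritten in full, which
        // is exactly the bandwidth-and-CPU cost Stewart worries about.
        translator.transform(
                new StreamSource(new File("incoming-quote-request.xml")),
                new StreamResult(new File("quote-request-internal.xml")));
    }
}

Scaling this past a handful of partners means writing and maintaining a stylesheet for every pair of vocabularies, which is why build-versus-buy-versus-wait is a real question.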
Is there a theme to these questions, to these plans?

Customer service again. I guess that's my theme for today and for the year. In order to provide the service level that clients expect, we need to keep available much more information about the services that we're currently providing and about the communications that we've had with them in the past. That's what's driven us in the past, and it will continue to push us this year.