Commonwealth Bank CIO talks cloud computing

The Commonwealth Bank of Australia CIO and group executive for enterprise services, Michael Harte, is serious about cloud computing. CBA wants to buy software and infrastructure as a service over a network, and Harte sits on the Enterprise Cloud Leadership Council. In this extensive interview, he talks to CIO about his vision for cloud computing and the opportunity it presents to break vendor lock-in and create contestability.

What are the key elements of your cloud computing strategy?

Harte: For a couple of years we’ve been working on cloud computing. But more accurately, because cloud computing is quite amorphous, all we want to do is buy software and infrastructure as a service over a network. We only want to pay for what we use. And we only want to pay on demand. So we are looking to create standards that are open, component-oriented and service-oriented, so that we ‘free up’ the economics. We want to get out of infrastructure computing and into fine-grained components and highly granular data, so that our customers enjoy new services. This is not about some technical breakthrough. This is about supplying customers with the services they want, and doing that at value.

So the technology becomes the means to execute the strategy?

You got it.

Is there anything you can say about the things you’re looking at and what we might actually see happen from you in the near term?

Let’s talk about it in three ways, under the umbrella of the ‘Enterprise Cloud Leadership Council’, and the three things that are being pursued there.

The first thing is the accelerated establishment of standards. If you get any big group of people together it tends to look like the United Nations. We don’t want that. We want an agile, fast-forming group that can create standards. We published the first standard, ‘virtual machine capability’, in the midrange. We have x86 machines, which are your midrange servers, and they’re operating on the Linux platform, and that allows you to build your own capabilities within the corporation.

So the virtual private cloud or a public cloud [is created] by using the same technology and importing some of those activities outside the corporation. We work with companies like Savvis and Amazon, and we can run application development and testing domains privately inside or publicly outside the corporation, and we can scale those up within minutes. We can make them production scale. Once we’ve developed and tested those capabilities, and they’re operating at full production, we can determine whether they stay outside in the public cloud or [should be] brought back inside the corporation.

All of that’s provisioned within minutes, rather than days and weeks, and allows for a very nimble ‘pay as you go’ service capability. And you can determine whether you want to run that securely and privately inside the corporation or whether you can get adequate security and run that outside the corporation.
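
[To make that concrete: the ‘provisioned within minutes, pay as you go’ model maps onto today’s self-service infrastructure APIs. The sketch below is illustrative only; the image ID, instance type and region are placeholder assumptions, not CBA’s actual configuration.]

```python
# Illustrative only: provision a short-lived test/dev machine on a
# public cloud (Amazon EC2 via boto3) and release it when finished,
# so you pay only for the time used. The AMI, instance type and
# region are placeholders, not CBA's actual configuration.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Request an on-demand instance for a development or test workload.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",    # placeholder Linux image
    InstanceType="t3.medium",  # placeholder size
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned {instance_id} in minutes, not weeks")

# ... run the workload ...

# Terminate when done: billing stops and the capacity is released.
ec2.terminate_instances(InstanceIds=[instance_id])
```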

Are you able to talk about any of the specific activities that you have enacted in a cloud-based model?

Let’s talk about two. The first one, database as a service, we developed ourselves, and in doing so, we shared it with other corporations: not just banking corporations, but companies in pharmaceutical, manufacturing or distribution industries. We’ve provisioned that on Oracle, and we have collaborated to build a really good, comprehensive stack of database services from front to back. We can provision that really quickly, and we are prepared to share it with others, to say: ‘Look, this is safe, this is secure, this is good. You guys can learn and adopt and adapt this stuff as well. You don’t have to be trying to invent this on your own.’
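
[In practice, ‘database as a service’ means a developer requests a pre-secured database through a self-service interface instead of raising an infrastructure project. The sketch below illustrates that pattern only; the endpoint, payload and response fields are hypothetical, not CBA’s actual interface.]

```python
# Hypothetical self-service "database as a service" request. The
# endpoint, payload and response fields are invented for illustration;
# only the pattern (ask for a database, get connection details back
# in minutes) follows the interview.
import requests

resp = requests.post(
    "https://dbaas.example.internal/v1/databases",  # hypothetical endpoint
    json={
        "engine": "oracle",   # the stack described is built on Oracle
        "size_gb": 50,
        "environment": "test",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["connection_string"])  # ready to use, no project lead time
```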

The second is that there are infrastructure investments that can free up resources, so that we’re not tying up a whole lot of assets and activities in utility-style computing, and taking a long time to do it.

We’ve put .Net and ‘net apps’ on top of our own development capability. And we also have it running through servers and on top of Amazon, so we can run test and dev environments inside and outside the organisation. We’ve done that using a whole different set of development tools and testing tools, and we’ve put them on top of public infrastructure. And we’ve been able to point internal and external developers at those resources. We can provision those in under 10 minutes and we can do it at up to a tenth of the cost, so there are great advantages in being able to provision for big projects.

What percentage of the server resources within the bank would actually be used for test and dev at the moment?

I think it can sometimes be up to around 40 per cent of all the compute.

Which obviously wouldn’t all be moved out of the organisation, but there’s a huge amount of scope to play with, isn’t there?

That’s right. We are not doing it to try and move everything to the cloud. A lot of people think: ‘They make a generalised cloud statement and they’re trying to put everything in the cloud’. That’s a little bit superficial. There are just certain types of activity that we would move and then create value by getting that machine arbitrage at lower costs. That then frees up that money to be put to other purposes, so you can do other things with those resources.
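
[A back-of-the-envelope reading of the two figures quoted here, around 40 per cent of compute on test and dev, and on-demand provisioning at up to a tenth of the cost, assuming a round total spend for illustration:]

```python
# Back-of-the-envelope arithmetic using the figures quoted in the
# interview. The total spend is an assumed round number.
total_compute_spend = 100.0  # assumed units (e.g. $m) for illustration
test_dev_share = 0.40        # "up to around 40 per cent of all the compute"
cloud_cost_factor = 0.10     # "up to a tenth of the cost"

test_dev_spend = total_compute_spend * test_dev_share  # 40.0
cloud_spend = test_dev_spend * cloud_cost_factor       # 4.0
freed_up = test_dev_spend - cloud_spend                # 36.0

print(f"Freed up for other purposes: {freed_up:.1f} of "
      f"{total_compute_spend:.0f} units")
```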

There are always reasons given against moving towards the cloud: security; the portability of data back out of cloud services; where the data will actually be hosted physically. How have you been able to move through these various concerns?

You don’t move through all of them at once. Way back when outsourcing was in vogue, people were worried about security. When offshoring became popular, security was a problem. When virtualisation came along, security was a problem. All of those moves have been either human arbitrage, in terms of staff productivity, or they’ve been machine arbitrage. Virtualisation is a machine arbitrage and cloud computing is about machine arbitrage. It’s basically getting better utilisation from servers that are operating at different capacity. Whether inside or outside the organisation, you are going to get a higher level of utilisation and productivity. Security is definitely a concern, but wherever there is a large arbitrage to be had, people will decide whether or not they are going to have it. Then, if they need further security compliance, they can work with regulators and the risk community and figure out what more they need to build back to ensure that robust security. But you will still have that machine arbitrage and you will still go after it, even when you’ve built back the cost of that security.

When did you first start on this vision? It sounds like something that you were probably thinking about even before the technology was available to do it?

We went to Google in May of 2007 and we could see that they were doing things like messaging and email in the cloud, and it freed up so much resource. And we thought, ‘Wow, wouldn’t it be nice if you could do other enterprise-scale activities on public infrastructure, and you could partition that and secure that’.

We were working with EDS at the time to renegotiate contracts and there just wasn’t the business motivation from the suppliers’ point of view to make the switch. Then we did a large network outsourcing with Telstra and I tried to get those guys to think about provisioning intelligent converged networks on a more cloud-oriented basis and they wouldn’t do it, because they didn’t have to! The incumbent service providers, whether it’s IBM or EDS, were really struggling with the model because they tend towards their own accounting standards. They’ve still got their own strong business models. They still wanted to continue to ‘lock’. They do resist contestability. And that is the antithesis of what we were trying to do. We tried for a couple of years to get virtualisation to occur, with EDS and VMware, and with EMC and NetApp. They’d come some way along the path, but they still managed to resist. That’s because there was not a lot of competition.

Now there are a lot of providers that are coming into the marketplace. And if we can get those new providers to create a credible threat to the incumbent, we have a real chance of breaking the lock and creating true contestability.

If you go back to the standard that I mentioned for the virtual machine, it’s not that different from the standard that enabled virtualisation. The x86 midrange machines allowed standardisation for virtualisation. If you had 4000 boxes all running at 5-10 per cent utilisation, you could quickly halve the number of boxes and get that utilisation up to 80 per cent, and you could free up money to provide for better security and better autonomics. If you had peak loads you could shift those loads. Now, if you get those same x86 boxes in, you run them on Linux [and] you can then mobilise. That means you can shift from one provider to another. So if they were running on HP or Dell or IBM or some other service provider (if they are not inside the corporation) you can start to shift work. And in the shifting of the work (the mobilisation of work) you can create contestability.
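
[A rough sketch of that consolidation arithmetic, assuming steady loads. Halving the fleet is the quick first step; a sustained 80 per cent utilisation target implies consolidating further still. Real ratios depend on peak loads and failover headroom.]

```python
# Illustrative consolidation arithmetic, assuming steady loads.
# Real ratios depend on peaks, failover headroom and growth.
import math

boxes = 4000
current_utilisation = 0.075  # midpoint of "5-10 per cent"
target_utilisation = 0.80

work = boxes * current_utilisation             # capacity actually in use
needed = math.ceil(work / target_utilisation)  # boxes at the target

print(f"{boxes} boxes -> {needed} boxes at 80% utilisation")
# 4000 boxes -> 375 boxes at 80% utilisation
```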

That’s where the competition comes in and that’s where we get much cheaper utility computing. We’re not doing it to skin the providers, we are actually saying to them, ‘We don’t want to spend as much money as we have been on this utility or commodity style compute’. We want to spend much less — up to half of what we’ve been spending — and all that money that we free up can be spent on getting better dynamic and rich application services, and more granular data services, which add far more value to the interactions that we have with customers. We are trying to shift our money away from the backend utility and up closer to the interactivity with customers.

It makes more sense to spend more money on interacting with customers than it does to spend money on running an IT system.

Customers now want ‘anytime, anywhere’, real-time convenience and real-time value. They don’t want to wait for their value. They don’t want to wait for a batch process. So we’ve gone into modernising the core systems and giving them real-time accessibility, real-time convenience and much richer services.

They actually want to consume these services. It’s not like ‘Internet 1.0’, where it was static pages. Now they have dynamic content that is streamed; they have interactions all the time through their social networks. They want to see confirmation of interactions and transactions in real time. You have to have a high level of granularity and a high level of dynamic content in order to serve their needs. They don’t care that you’ve spent half your money on the backend. They expect security. They expect dynamic content. They want really high-class, highly accurate, highly available information, so that they can do their banking while they’re in a taxicab or in an airport lounge, or when they’re at home rushing between different jobs. They do not want to have to wait in a queue on the phone and they do not want to have to wait for a confirmation the next day. They need it now! We’ve got to free up the systems and move out of that clunky infrastructure into far more dynamic front-end content and capability.

If you were to look into your crystal ball, maybe five years into the future, how do you think the IT environment at Commonwealth Bank would differ from how it looks today?

We’ll have more people focused on interpreting information, making offers in real time, and pricing those offers based on a customer’s risk profile and the customer’s loyalty. We’d be able to offer those new granular products and services to each and every customer as and when they need it, rather than the long-dated product cycles, sales cycles and development cycles that we are currently committed to.
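
[As a toy illustration of that kind of real-time, customer-level pricing; the base rate, weightings and inputs below are invented for illustration, not the bank’s model.]

```python
# Toy example: price an offer in real time from a customer's risk
# profile and loyalty. Base rate, weightings and inputs are invented.
def price_offer(base_rate: float, risk_score: float, loyalty_years: int) -> float:
    """Annual rate: riskier customers pay more, loyal customers less."""
    risk_premium = 2.0 * risk_score  # risk_score in [0, 1]
    loyalty_discount = min(loyalty_years, 10) * 0.02
    return base_rate + risk_premium - loyalty_discount

print(f"{price_offer(base_rate=6.5, risk_score=0.3, loyalty_years=8):.2f}%")
# 6.5 + 0.6 - 0.16 = 6.94%
```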

In terms of the IT environment, what percentage of the bank’s compute activity might end up residing within the cloud environment?

I don’t need it to be in the cloud per se. When we go back to the fundamentals of the cloud, we are talking about services that are available increasingly on a unit-price basis. They are only consumed as and when they are needed, and they are subscribed to across a network. We already see that our retail customers are doing that, so they are increasingly demanding that we supply it that way. And we are trying to say to the suppliers ‘free up the model so that we get out of utility and into value’. So we are trying to make a shift away from spending half of all of our budget on maintaining lights-on infrastructure, and instead get more of that money into creating really high value, highly responsive services (whether they are data services or application services), and reinventing them for customers as and when they need them, rather than spending all of that money on the back end.
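
[A minimal sketch of the unit-price, pay-for-what-you-use model described here; the service names, units and prices are assumptions for illustration.]

```python
# Minimal sketch of metered, unit-price billing: consumption is
# recorded per unit and billed as used, not paid for up front.
# Service names, units and prices are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class UsageRecord:
    service: str
    units: float       # e.g. instance-hours, GB-months, requests
    unit_price: float  # price per unit consumed

month = [
    UsageRecord("test/dev instance-hours", 1200, 0.12),
    UsageRecord("database storage GB-months", 500, 0.10),
]

bill = sum(r.units * r.unit_price for r in month)
print(f"Pay-for-use bill: ${bill:.2f}")  # $194.00
```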

Does that mean we should be thinking about this almost as a philosophy or mindset, rather than as a technology change?
