by Byron Connolly

UNSW scientists using AI to create elastic cloud

News
Apr 08, 2014
3 mins
Cloud Computing

Researchers at the University of New South Wales (UNSW) are using artificial intelligence to build a computer network they claim can regulate its own consumption of public cloud services.

A research team has built a software controller that it says could potentially be used by every virtual server instance in the cloud to monitor the performance of server applications.

The controller uses a simplified version of reinforcement learning – an artificial intelligence method that is more commonly associated with robotics than IT.
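
The article does not detail the team's algorithm, so the following is only a minimal sketch of what a reinforcement learning scaling controller could look like: a tabular Q-learning loop in which states are coarse load buckets and actions add or remove capacity. All names, states, and parameters here are invented for illustration, not taken from the UNSW system.

```python
import random

# Hypothetical sketch: tabular Q-learning for a scaling controller.
# States are coarse load buckets; actions add, remove, or keep instances.
STATES = ["low", "normal", "high", "critical"]
ACTIONS = ["scale_down", "hold", "scale_up"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

q_table = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy: mostly exploit what the controller has learned."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update rule."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)]
    )

# One learning step: load was high, an action was taken, demand kept rising.
s, a = "high", choose_action("high")
update(s, a, reward=-1.0, next_state="critical")  # penalise the bad outcome
```

A reward function that penalises SLA breaches more heavily than idle capacity would, over time, push such a controller toward scaling up before load becomes critical, with no hand-set thresholds.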

Under the proposed model, if an application's performance becomes critical – due to a sudden increase in demand, for example – the controller will communicate with other controllers on the network and automatically determine how and where to source extra capacity to cope with the load.

“The controllers figure out which one has high load and which one has much less load, and how to balance that out,” said Srikumar Venugopal, a lecturer at UNSW’s School of Computer Science and Engineering and leader of the research team.
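
The coordination protocol itself is not described in the article; purely as an assumed sketch, each controller might periodically share its utilisation with its peers, with an overloaded controller sourcing capacity from the least-loaded one. The class, watermark, and figures below are illustrative assumptions, not the team's design.

```python
# Hypothetical sketch of decentralised load sharing between controllers.
# Each controller gossips its utilisation; an overloaded one picks the
# least-loaded peer as the place to source extra capacity from.
HIGH_WATERMARK = 0.8

class Controller:
    def __init__(self, name, utilisation):
        self.name = name
        self.utilisation = utilisation  # fraction of capacity in use
        self.peers = []

    def balance(self):
        """Return the peer to offload to, or None if no action is needed."""
        if self.utilisation < HIGH_WATERMARK or not self.peers:
            return None
        target = min(self.peers, key=lambda p: p.utilisation)
        if target.utilisation < self.utilisation:
            return target  # route new requests or place replicas there
        return None

a, b, c = Controller("a", 0.95), Controller("b", 0.30), Controller("c", 0.60)
a.peers = [b, c]
print(a.balance().name)  # -> "b", the least-loaded peer
```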

Venugopal, who completed a PhD in grid computing, said most applications are not built for ‘elasticity’, a feature of cloud computing that allows administrators to add and remove resources.

This is why external scaling tools – such as Amazon’s Elastic Load Balancer – exist, enabling IT staff to manually provision the right amount of resources at the right time, he said.

Administrators set rules that govern when to spin up new virtual servers or shut them down, drawing on historical data and their own experience.
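
By way of contrast, a rule-based autoscaler of the kind Venugopal describes looks roughly like the sketch below; the thresholds are exactly the part an administrator must tune by hand. The metric names and numbers are invented for illustration.

```python
# Hypothetical sketch of the manual, rule-based scaling the team wants
# to automate: an administrator hard-codes metrics and thresholds.
RULES = [
    {"metric": "cpu", "above": 0.70, "action": "add_instance"},
    {"metric": "cpu", "below": 0.20, "action": "remove_instance"},
]

def evaluate(metrics):
    """Return the first scaling action whose threshold is crossed."""
    for rule in RULES:
        value = metrics.get(rule["metric"], 0.0)
        if "above" in rule and value > rule["above"]:
            return rule["action"]
        if "below" in rule and value < rule["below"]:
            return rule["action"]
    return "hold"

print(evaluate({"cpu": 0.85}))  # -> "add_instance"
```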

However, the team hopes its research will lead to a commercially available product that makes these decisions automatically.

“Elasticity is already there but there’s a lot of human involvement. It’s an art, you need to know what’s the right threshold and what’s the right set of parameters to use,” said Venugopal.

“You have to set a threshold for each type [of application] and it gets very complicated. As an analyst, if you are looking at an organisation’s infrastructure and deciding which pieces should be scaled, it can become complicated because of the amount of dependencies.

“We are trying to automate some of this and we eventually hope that a few years down the line we will have an environment that people can use.”

Venugopal said UNSW has already created a software infrastructure that does decentralised scaling, but it needs to be tested with actual enterprise applications.

“Experimentally, we know it does well but we want to get a real world experience and fine-tune it … then we can make much better claims,” he said. “That’s the current focus, we want to make sure we can translate this into actual enterprise environments.”

The team still needs to overcome a few configuration management challenges, one being how to stop a single incorrectly configured virtual machine from communicating the wrong information to others and causing incorrect scaling decisions to be made.

The team has also looked at deriving scaling parameters from the service level agreements (SLAs) that many organisations have internally or with external providers, said Venugopal.

“That’s ongoing research because they [SLAs] are [sometimes] not directly translatable to thresholds. In many cases, they have conditions that are buried somewhere and that’s not really clear.”

Follow Byron Connolly on Twitter: @ByronConnolly
