by CIO staff

Making sense of the hybrid cloud minefield

News
Mar 01, 2020
Cloud Computing

How tech chiefs are managing complex cloud infrastructure, dealing with cyber security challenges, and deciding when to bring services back in-house.

Credit: David Thomson Photography

It’s clear that the uptake of cloud services – largely hybrid infrastructure – is driving growth across many Australian organisations. Cloud adoption in the country has resulted in a cumulative productivity benefit to the economy of $9.4 billion over the last five years.

Organisations are typically running hybrid application and infrastructure environments with a mix of services in their own on-premise data centres, in private clouds and out in the public cloud.

But what are the key considerations for running a hybrid environment?

Senior technology executives got together in Melbourne recently to discuss their current cloud computing environments and the strategic challenges they are facing. The lunch event was sponsored by Lenovo.

Most customers face issues around the costs of cloud computing, says Joao Almeida, chief technology officer at Lenovo Data Center Group.

“While cloud solutions are often seen as a cheaper alternative to on-premise infrastructure, the reality is that they are more expensive than on-premise solutions, with added challenges around data sovereignty,” he claims.

Almeida says that moving workloads that were originally designed for on-premise solutions to a cloud environment, without prior reconfiguration, will not result in a good outcome.

“We strongly advise customers to be more selective about which workloads they intend to move to the cloud, and to ensure that they re-architect these workloads for the cloud while keeping and optimising the workloads that will remain on-premise. This approach ensures that customers end up with a balanced hybrid cloud environment,” he says.

David Benjamin, chief information officer at Millennium Services Group, says that the facilities services company faces several common issues when undertaking a cloud transformation.

The biggest challenge, he says, is to connect legacy systems with newer cloud-based business applications.

“For our short to medium-term plans, solutions that can operate in a hybrid cloud environment are essential along with services from vendors that can support a company’s migration to the cloud,” he says.

Security is also an issue: while public cloud providers take responsibility for the security of their platforms, they are not responsible for the security of a company’s applications, servers and data.

“Millennium is still required to encrypt and secure our data. We also invest in a suite of tools such as anti-malware, anti-virus and secure web gateways from various cloud service providers to protect our data from cyber threats,” he says.

Rohan Penman, global head of technology at T2 Tea, says trust is most important when choosing a tier 1 cloud provider.

“You have to have faith that security is their number one priority. Common sense dictates that security is the key when it comes to storing your applications and data, but as we have seen, sadly you can get tied up in a mistake,” Penman says.

“The only mitigation is to stick with the big names and ensure best practice is followed during the configuration of any resources. An engineer’s error can easily allow access to things that you are trying to keep secure, so security is everyone’s responsibility. Just look at the horror stories around [Amazon] ‘S3 buckets’ being left public,” he says.
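The misconfiguration Penman alludes to can also be guarded against programmatically. As a minimal sketch (assuming Python with AWS’s boto3 library and credentials already configured, not anything T2 actually runs), the following audits every bucket in an account and enforces the S3 Block Public Access settings:

```python
# Minimal sketch: audit every S3 bucket in the account and enforce
# Block Public Access. Assumes AWS credentials are already configured.
import boto3
from botocore.exceptions import ClientError

BLOCK_ALL = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
    except ClientError:
        config = {}  # no Block Public Access configuration set at all
    if config != BLOCK_ALL:
        print(f"{name}: public access not fully blocked, fixing")
        s3.put_public_access_block(
            Bucket=name, PublicAccessBlockConfiguration=BLOCK_ALL
        )
```

AWS also supports the same four settings at account level, which closes the gap for buckets created later.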

Sally Wang from ANZ echoes these comments, saying that the biggest challenge for the bank when undertaking a cloud transformation is providing the right level of information security.

“One way of solving this is to add another layer of masking to hide the identity of data. But this can make it difficult to use and link the data,” she says.
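The trade-off Wang describes shows up in deterministic pseudonymisation, one common masking technique: the same input always yields the same token, so masked records stay linkable across datasets, yet the identities behind them are hidden. A minimal sketch in Python (the key handling and field values are illustrative, not ANZ’s approach):

```python
# Minimal sketch: deterministic pseudonymisation with HMAC-SHA256.
# The same input always maps to the same token, so masked records
# remain linkable, but identities cannot be recovered without the key.
import hmac
import hashlib

# Illustrative placeholder: in practice the key would live in a
# key-management system, never in source code.
SECRET_KEY = b"replace-with-key-from-your-kms"

def pseudonymise(value: str) -> str:
    """Return a stable, non-reversible token for a sensitive value."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Two occurrences of the same customer ID produce the same token,
# so joins across masked datasets still work.
assert pseudonymise("customer-12345") == pseudonymise("customer-12345")
```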

Data governance is also an issue in the banking sector as all banks need to assess risk based on APRA’s prudential standards.

“There’s a significant migration risk for banks that have long histories – some more than 100 years. It’s a challenge moving existing applications to new cloud environments smoothly,” she says.

IT architecture laggards

Aaron Gillett, digital leader at engineering and construction company KBR, says the IT architectures in use within the engineering domain are still lagging behind those of more progressive industries.

“Hence we have an opportunity to skip straight from on-premise infrastructure to the public/private cloud. We also have culture and diversity challenges to deal with, so we have a lot on our plate while we undertake our pivot to an agile-based organisation,” he says.

Gillett says KBR works on some of the most technically complex engineering projects in the world with organisations such as NASA, the Australian Department of Defence and the United States Department of Defense.

“We have been able to extract every ounce of value out of the legacy systems that serve these activities. However, it is becoming more and more critical to look at updating our core systems towards a more scalable, collaborative architecture to encourage innovation, knowledge capture, scaling and rule-based engineering. This is essentially the crux of our strategy,” he says.

Set and don’t forget

Using cloud services typically takes the pressure off technology teams by automating configuration, patching, provisioning, migration and capacity planning tasks.

“Cloud migration is not a ‘set-and-forget’ exercise,” says Lenovo’s Almeida.

“While CIOs should not be primarily concerned with the likes of patching and configuration, many of these tasks will still be carried out in a cloud environment and time will need to be spent managing cloud resources.

“Unless a CIO is running a fully managed software-as-a-service solution, the only real management task that is eliminated in a cloud environment is the need to manage hardware lifecycles,” he says.

Johan Boeve, head of IT infrastructure and operations at Winc A/NZ, adds that when it comes to automation, his organisation tends to go for the “repetitive, low hanging fruit” that can be monitored and governed natively in the cloud or by using an off-the-shelf toolkit.

“Patching, provisioning and scaling can all be done with a set number of parameters. It’s easier in the cloud as it was built to be automated at scale,” he says.
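As an illustration of the parameter-driven automation Boeve describes (a generic sketch, not his organisation’s setup), scheduled scaling in AWS really does come down to a handful of parameters:

```python
# Minimal sketch: scale a hypothetical development Auto Scaling group
# to zero overnight and back up on weekday mornings. The group name and
# cron expressions are illustrative; Recurrence is evaluated in UTC
# unless a TimeZone is supplied.
import boto3

autoscaling = boto3.client("autoscaling")

# Scale in at 7pm, Monday to Friday.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="dev-environment-asg",
    ScheduledActionName="scale-in-overnight",
    Recurrence="0 19 * * 1-5",
    MinSize=0,
    MaxSize=0,
    DesiredCapacity=0,
)

# Scale back out at 7am the next working day.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="dev-environment-asg",
    ScheduledActionName="scale-out-morning",
    Recurrence="0 7 * * 1-5",
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=2,
)
```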

T2’s Penman adds that public cloud costs are almost always consumption-based and, because these services are elastic in nature, environment scaling can be powered by an automated process, for example, shutting down development environments during off-peak hours.

“Applying automation ensures we’re consuming only what’s required and not overspending on unused capacity,” he says.
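In practice, the off-peak shutdown Penman mentions can be a short scheduled script. A hedged sketch in Python with boto3, using an illustrative tag convention rather than T2’s actual one:

```python
# Minimal sketch: stop EC2 instances tagged as development environments
# so that consumption-based billing stops with them. Assumes a scheduler
# (cron, EventBridge, etc.) triggers this at the off-peak boundary.
import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged environment=dev (illustrative tag).
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} development instances")
```

A mirror-image script, or simply starting the instances on demand, brings the environments back for the working day.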

Hybrid cloud infrastructure also provides additional stability and resiliency through high-availability disaster recovery, he adds.

“As our hybrid cloud functions in multiple locations, deploying a standby copy of critical business systems acts as a safety net in the event of a disaster. During a crisis, even well-trained staff struggle to think clearly, so having an automated process to perform critical tasks – the process of failing over and/or recovering from an outage – has reduced our recovery time and improved overall stability,” he says.
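At its simplest, the automated failover Penman describes is a health check that repoints DNS at the standby copy. A minimal sketch (the zone, endpoints and addresses are hypothetical; in production, managed health checks such as Route 53’s would normally handle this natively):

```python
# Minimal sketch: if the primary site stops answering its health check,
# repoint the DNS record at a standby site. All names and addresses
# below are hypothetical.
import boto3
import urllib.request

ZONE_ID = "Z0EXAMPLE"                          # hypothetical hosted zone
RECORD = "erp.example.com"
PRIMARY_HEALTH = "https://primary.example.com/health"
STANDBY_IP = "203.0.113.10"                    # documentation-range address

def primary_healthy() -> bool:
    """Return True if the primary endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

if not primary_healthy():
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=ZONE_ID,
        ChangeBatch={
            "Comment": "Automated failover to standby site",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD,
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": STANDBY_IP}],
                },
            }],
        },
    )
```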

When cloud doesn’t deliver

Lenovo’s Almeida says the organisation is seeing a trend of customers bringing services back from the cloud to their own premises. There are several reasons, including cost, performance, fit for purpose and data sovereignty.

“It’s important that customers making such decisions speak to their trusted infrastructure provider to ensure they find the right, balanced solutions. In many cases, customers can still maintain the benefits of the cloud in an on-premise environment,” he says.

Bernard Ching, group IT manager at ANCA Group, says the manufacturer has pulled cloud services back in-house due to cost pressures.

“We have had archiving in Microsoft Azure and the cost kept creeping,” he says. “There was a lack of understanding of storage and trailing costs with the cloud. There is a place for archiving some data to the cloud, but there’s also the ‘old-fashioned tape drive’.”

T2’s Penman adds that the organisation has moved its ‘test and train’ environments and, where it’s safe to do so, its user acceptance testing capabilities out of the cloud and back in-house.

“It’s always a matter of cost – systems that don’t require around-the-clock monitoring or access just take up too much operational expenditure to warrant the expense,” he says.

“There is a level of scripting and automation that can be put in place to put these resources to sleep, but they do still take up quite a lot of storage and resources when being used. This is thousands of dollars per month.”