On-prem serverless – missing the point?

Serverless in your datacenter isn’t really “serverless,” is it?

Serverless and containers are two of the fastest-growing trends in IT. With several options available, spanning both public and private cloud, which is best for your organization? Are containers the future, or just a bridge toward serverless? As a business leader, it’s important to consider the business value of each option. To do that, let’s look at the history of distributed server architecture, how different trends have affected the operational requirements of IT, and the benefits serverless offers.

The evolution from “serverful” to “serverless”

As someone who started my career as a server administrator in the late 90s, I can tell you that being a server admin is not a glamorous job. Back when I first started, we would spend weeks procuring a new server, hours unboxing, configuring, and racking it, and days installing and configuring software. Each server required specific drivers, and moving installations between two different types of servers was a tedious process, to say the least. Each new application, along with every bit of infrastructure it required, demanded an extensive amount of duplicated operational effort. Then came the savior: virtualization.

Virtualization – the new way

The hypervisor revolutionized the world of server administration. Aside from the financial benefit of running multiple virtual servers on one physical server, increasing resource efficiency, there were huge operational benefits around provisioning and running servers. Since the hypervisor presented the same virtualized “hardware” to the OS layer, the drivers within the OS were the same from machine to machine, meaning you could move virtual servers between different hardware in a matter of seconds, sometimes even while the machine was still running. Templates provided the ability to provision new servers in hours instead of days, and snapshots allowed you to revert the state of a machine in minutes if something went wrong. Duplicated effort and delivery times were both greatly reduced, but the power of virtualization could be extended even further.

From virtualization comes cloud

The new features of virtualization, combined with some clever automation, allowed for even faster provisioning of servers and made a better way to inventory and allocate costs possible (although this was not always implemented in practice). This model is typically referred to as infrastructure-as-a-service, or IaaS. IaaS can be delivered either from a public cloud provider or within your own datacenter, which is known as private cloud. Effort is spent upfront building server templates that can be deployed quickly and automatically, without a server admin hand-crafting each and every build. This naturally promotes greater homogeneity of server builds; it was extremely common to find differences between servers provisioned by different admins, even when they started from a common server image. Reduced differentiation between servers shortens the mean time to resolve incidents and allows for smoother patching and compliance checks. Private cloud still requires implementation and maintenance of the underlying cloud infrastructure, whereas public cloud places this responsibility solely in the hands of the cloud provider, leaving the business responsible for their network, OS, and application layers.

Containers – removing the hypervisor

Hypervisors allow for better utilization of hardware resources, but there is still a bit of redundancy. To keep application environments separate, each virtual machine requires its own operating system and runtime environment. This means that a physical server running a hypervisor with 20 VMs is running 20 operating systems and 20 applications. Containers shift the virtualization layer upward, allowing you to install multiple applications on top of one operating system, each in its own “container” – a separate runtime space for each application – removing the redundant operating system layer and allowing you to run even more applications on the same physical hardware.

The portability aspect of virtualization is not only retained but enhanced, with containers able to be moved between container hosts cleanly, and the operational overhead on the server hardware is greatly reduced. Fewer OS installs, fewer servers to patch, and fewer OS-level issues to monitor. The management of the containers themselves adds a new layer of complexity, but at the hardware and OS level it greatly reduces the operational overhead for the infrastructure team. The complexity is moved to a new platform, one better designed to be handled by orchestrated automation tasks, but operational overhead remains.

Public cloud serverless – forget about the hypervisors and containers

Serverless, less commonly (but more accurately) known as functions-as-a-service or FaaS, abstracts your applications even further away from the underlying infrastructure. As many have pointed out, the name is a misnomer, as there are obviously servers running in the background. The reason it’s called serverless, though, is that you don’t care about the servers. You should never have to see them or think about them; you just execute your code.

Serverless platforms, including Amazon’s Lambda, Microsoft’s Azure Functions, and Google Cloud Functions, allow you to execute code written in predefined languages, such as Java or Python, against their platforms. There are no servers to provision or containers to manage, and the data is stored elsewhere – for instance, in a database or cloud storage platform. Serverless allows for a model in which you only pay for the compute cycles used while your code executes. There is no need to pay for an inactive machine awaiting its next instruction, nor to worry about adjusting capacity when demand fluctuates.
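To make the model concrete, here is a minimal sketch of what a function looks like on such a platform, in the style of an AWS Lambda Python handler. The payload fields and the processing-fee logic are illustrative assumptions, not taken from any real system; only the `handler(event, context)` entry-point shape reflects the Lambda programming model.

```python
import json

def handler(event, context):
    """Minimal function-as-a-service style entry point.

    On a platform like AWS Lambda, 'event' carries the trigger payload
    (an API request, a queue message, a card swipe, etc.) and 'context'
    carries runtime metadata. There is no server code here at all - the
    platform invokes this function on demand and bills for the runtime.
    """
    amount = event.get("amount", 0)     # illustrative payload field
    fee = round(amount * 0.029, 2)      # e.g. a hypothetical 2.9% processing fee
    return {
        "statusCode": 200,
        "body": json.dumps({"amount": amount, "fee": fee}),
    }
```

Everything outside the function body – provisioning, scaling, patching the runtime – is the platform’s problem, which is precisely the abstraction the paragraph above describes.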

Some payment processing companies have begun using serverless, allowing them to run their code when a card transaction takes place, changing the cost of processing payments to a truly variable per-transaction cost. On days like Black Friday, increased capacity requirements are handled seamlessly, maintaining the same cost per transaction. Likewise, there is value for other, less frequently executed workloads. You may have a server that sits idle for all but a couple of hours per month, waiting to do its month-end processing. With serverless, a schedule or event triggers that function to run, and you only pay for the compute cycles you use. Operating system upgrades, patching, and application runtime upgrades are no longer your remit; those are all handled by the public cloud provider. You just focus on your application – the part that truly adds value to your business.

Private cloud serverless – where is the benefit?

As you can see, although it may not be suitable for all workloads, public cloud serverless really does eliminate virtually all of the infrastructure concerns behind your applications. You don’t have to worry about uptime, scaling, patching, or anything like that. What about running serverless in your datacenter, though?

There are multiple on-prem serverless options available, often running within containers. The downside is obvious – you’re still managing the servers, the containers, and now the serverless environment on top of that. There is upside in the fact that your developers don’t need to worry about any of that and can focus on their code, but you’re still on the hook for the uptime, performance, and scalability on the other end. It’s sounding very “serverful” to me.

Take the payment processing use case: to handle the Black Friday transaction onslaught, you either have to build for peak load (the old-school way) or design the system for auto-scaling (the cloud way). Either way, you’re spending work cycles planning, building, and maintaining the infrastructure.

It’s all about the cost

In all the years I have been in consulting, there have been few organizations that could accurately tell me what their cost of IT really was. Some had a much better idea of the cost than others, often based on standard numbers provided by analysts, but typically there are “black box” elements like shared resources, utilities, and other costs that make it very difficult to quantify the true cost of a particular workload, action, or transaction.

With serverless on the public cloud, that opacity is gone. You still need to quantify the cost of producing your apps, but the cost of running them is completely transparent. There are no black-box elements – just a report of how often your code ran, how long it ran, and what it cost. You will know your cost per transaction, and you won’t have to worry about how that cost will change as you scale. It’s all black and white.
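The arithmetic behind that transparency is simple enough to sketch. The rates below are invented for illustration (real providers publish their own per-request and per-GB-second prices), but the shape of the calculation – and the fact that cost per transaction stays flat as volume grows – holds regardless of the actual numbers.

```python
# Illustrative only: these rates are assumptions, not any provider's pricing.
PRICE_PER_INVOCATION = 0.0000002   # $ per request (assumed)
PRICE_PER_GB_SECOND = 0.0000166    # $ per GB-second of compute (assumed)

def monthly_cost(invocations, avg_duration_s, memory_gb):
    """Estimate serverless spend from the three figures a usage report gives you:
    invocation count, average runtime, and configured memory."""
    compute_gb_s = invocations * avg_duration_s * memory_gb
    return invocations * PRICE_PER_INVOCATION + compute_gb_s * PRICE_PER_GB_SECOND

# A 10x jump in volume (e.g. Black Friday) leaves cost per transaction unchanged:
cost_1m = monthly_cost(1_000_000, 0.2, 0.128)
cost_10m = monthly_cost(10_000_000, 0.2, 0.128)
```

Contrast this with a datacenter, where shared power, cooling, licensing, and staff time make the equivalent per-transaction figure genuinely hard to compute.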

Looking beyond the hard cost figures, the difference in opportunity cost is huge. Opportunity cost, if you are not familiar, is the cost of choosing to do one thing over another. Instead of paying a team to perform all of this infrastructure work, you can focus your efforts on building unique applications that add business value – things that differentiate you from your competitors. The struggle of balancing operations versus innovation in your budget shrinks greatly. With on-prem serverless, you miss out on both of these benefits. In my opinion, that is one of the greatest differentiators between the two.

Aside from the opportunity cost of management versus innovation, there is the added value of regained time. Think about how long it takes to put together the infrastructure architecture for a new system. With serverless, you can get applications up within minutes. The value of removing weeks or months from each of your project timelines is immense. The volume of projects you can complete within a given timeframe with the same headcount increases greatly, leaving you more time to focus on aligning with the business strategy and delivering new capabilities, and less time fighting fires and keeping the proverbial lights on.

This article is published as part of the IDG Contributor Network.