When times are good, the harvest is full, the granary bins are overflowing, and it’s easy for an enterprise to gorge freely. When times turn hard and revenue evaporates, well, it’s time to cut back by slicing out those wild expenditures and bold ideas that once made so much sense. It’s not an easy or pleasant task, but if it’s done carefully, the result can be a nimble and efficient organization ready to sail on into the future.
Here are 11 not-so-obvious places to look for fat to cut from IT budgets before the CFO comes calling.
Dump the gimmicks
Does your website include extra data just to make it a bit more useful? Some sites like to roll in stock market quotes, weather forecasts or sports scores to make the experience a bit richer. Others include teasers like, “This hotel has been booked 18 times in the last 47 minutes.”
Clever bits of data and flashy displays are expected in good times — and can sometimes even increase revenue a smidge. But in lean times, they’re an easy place to save money, especially as these “enhancements” are usually supported by a separate microservice running in its own pod, often making frequent background calls to an information source or API that charges a subscription. These extra features may make your website more sophisticated, but if the extra fields are just gloss or fun, the cost of the data feed, the extra server time, and the software maintenance overhead are easy to cut.
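One cheap way to make these gimmicks cuttable is to gate them behind a configuration flag, so the paid feed can be switched off without a redeploy. A minimal sketch, with a hypothetical weather widget standing in for whatever subscription feed your site uses:

```python
import os

# Hypothetical flag: defaults to off, so the paid feed costs nothing
# unless someone deliberately turns it back on.
ENABLE_WEATHER_WIDGET = os.getenv("ENABLE_WEATHER_WIDGET", "false").lower() == "true"

def render_sidebar(city: str) -> str:
    """Render the page sidebar, with the paid weather feed optional."""
    if ENABLE_WEATHER_WIDGET:
        # Placeholder for the metered third-party API call.
        return f"<div>Weather for {city}: (live feed)</div>"
    # Lean-times fallback: no subscription call, no extra microservice.
    return "<div></div>"

print(render_sidebar("Austin"))
```

With the flag off, the microservice behind the widget can be shut down entirely rather than left idling in its own pod.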
Change architectural priorities
Development teams try to hit the targets provided for them. In fat times, many managers focus on metrics that emphasize speed, such as response time. Saving those milliseconds often means adding extra tiers of servers and building out elaborate networks that are closer to the users. These are noble goals because there’s plenty of research that shows that fickle customers respond well to speed.
But when times are tough and every penny matters, customers can make do with less. The price sensitive can wait a few milliseconds more for a deal.
If the priority is switched from speed to efficiency, many of these extra layers of caching and synchronization can melt away. Instead of measuring raw reaction time, look at the amount of computation it takes to satisfy a request. Sometimes slowing things down by just 10 percent or 20 percent can save more than half of the computational effort. Saving money on the extra resources also means saving the labor of keeping all of those layers running.
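Changing the metric can be as simple as instrumenting handlers to report CPU time consumed per request instead of wall-clock latency. A rough sketch of that measurement, with a toy handler standing in for real request work:

```python
import time

def cpu_cost(fn):
    """Record CPU seconds consumed per call -- the number to minimize
    in lean times -- rather than wall-clock response time."""
    def wrapped(*args, **kwargs):
        start = time.process_time()
        result = fn(*args, **kwargs)
        wrapped.last_cpu_seconds = time.process_time() - start
        return result
    wrapped.last_cpu_seconds = 0.0
    return wrapped

@cpu_cost
def handle_request(n: int) -> int:
    # Stand-in for the computation behind a real request.
    return sum(i * i for i in range(n))

handle_request(100_000)
print(f"CPU cost: {handle_request.last_cpu_seconds:.4f}s")
```

Tracking this number per endpoint makes it obvious which caching tiers actually reduce work and which ones merely shave milliseconds at a high computational price.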
Audit infrastructure allocations
When good developers are careful, they often create cloud instances with more memory and virtual CPUs — just in case there’s a burst of demand. Sometimes it’s not even the developers: someone will ratchet up the machines after a burst of users. In good times, this prescient habit of adding a bit of extra headroom is the kind of foresight that earns a bonus.
In tight times, though, this fat should be reclaimed with care. Dialing back the CPUs is usually a bit easier because the layers that allocate the cores are largely automatic. If there’s no extra CPU core available, the software just waits another nanosecond until one is clear.
Dialing back memory is a bit more dangerous because it’s more common for software to crash or fail when it can’t find more memory. If your code fails gracefully, you can watch the log files as you reduce the RAM.
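“Failing gracefully” here can mean catching the allocation failure and retrying with less, so a smaller instance degrades instead of crashing. A minimal sketch, with a plain list standing in for whatever in-memory cache your service actually builds:

```python
def build_cache(preferred_entries: int, floor: int = 1_000):
    """Try to allocate a large in-memory cache, halving on MemoryError
    so a downsized instance degrades instead of crashing."""
    size = preferred_entries
    while size >= floor:
        try:
            return [None] * size  # stand-in for the real cache structure
        except MemoryError:
            # This is the log line to watch for while dialing RAM down.
            print(f"cache of {size} entries failed; retrying smaller")
            size //= 2
    raise MemoryError("even the minimum cache does not fit")

cache = build_cache(1_000_000)
print(len(cache))
```

If the retry messages start appearing in the logs as you shrink the instance, you’ve found the floor.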
Sometimes the culprit is fast, local storage. On some instances, the disk space costs more than the CPU or the RAM — and much of that disk space sits empty because someone built the machine image twice as big, just in case. The cloud makes it easy to add extra disk space; putting disks back on a diet can be onerous. One how-to guide for shrinking a volume runs 23 steps; kicking an addiction takes only 12.
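Finding the downsizing candidates is the easy part. A quick audit sketch using the standard library, flagging any volume that is mostly empty (the 50 percent threshold is an arbitrary assumption; pick your own):

```python
import shutil

def disk_report(path: str = "/", threshold: float = 0.5):
    """Return the fraction of disk actually used at `path` and whether
    the volume looks oversized (i.e., mostly empty)."""
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    return used_fraction, used_fraction < threshold

fraction, oversized = disk_report("/")
print(f"{fraction:.0%} used; downsize candidate: {oversized}")
```

Run across a fleet, a report like this turns “just in case” disk allocations into a ranked list of shrink targets.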
Rethink disaster preparations
It may seem odd to use a huge societal disaster as an excuse to ease up on disaster preparations, but all of us should have a good idea of what is really important now. Building a robust, fail-safe database for collecting orders for mission-critical healthcare materials is more essential than ever. But extending the same belts-and-suspenders principles to a bunch of social media posts is not. Some databases don’t need to be replicated around the world every few milliseconds. Some keystrokes don’t need to be tracked. Some databases don’t even need transaction consistency. Many bits of data don’t even need much care at all. A set of log files for some transactions will do nicely for bits that are referenced occasionally if at all.
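For that last tier of barely referenced data, an append-only log file really is the whole architecture. A minimal sketch: one JSON record per line, no replication, no transactions (the file path here is a throwaway for illustration):

```python
import json
import os
import tempfile
import time

def log_event(path: str, record: dict) -> None:
    """Append one JSON record per line -- no replication, no transaction
    consistency, good enough for data referenced occasionally if at all."""
    record = {"ts": time.time(), **record}
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

log_path = os.path.join(tempfile.gettempdir(), "events.log")
log_event(log_path, {"event": "page_view", "page": "/pricing"})
```

When someone does need to look something up, `grep` and a five-line script replace the globally replicated database that used to hold this data.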
Switch to serverless
Over the past few years, a new option for lightly used resources has reached critical mass. “Serverless,” which still involves servers hidden under layers of abstraction, makes computing easier because the infrastructure takes care of starting up and shutting down virtual servers when the occasional request appears.
The prices are often mind-bogglingly small as cloud vendors charge the tiniest fractions of a cent for responding to some web request. If you’ve got some databases or websites that only welcome a few dozen people a month, your bill could be less than a penny and it might round down to zero.
Search out the servers with a very low load and look to replace them. They might be experimental tools or ones serving business niches. Be careful with anything that might go viral or encounter a usage spike: the fractions of a penny add up quickly when too many requests show up.
Consider low-rent options
In the old days, the IT department solved problems by building out a proprietary database curated by a proprietary front-end. Custom software was the name of the game. Now it’s easier than ever to just dump data in a cloud-managed spreadsheet. Microsoft’s Excel, for instance, has an API that takes JSON packets and so does Google’s Sheets. If you’re already paying for a corporate subscription for desktop tools, why not just push basic data directly into spreadsheets?
It’s a good plan that also empowers other team members who can work with spreadsheets but can’t handle SQL. But there are limits. Google Sheets, for instance, can only handle 400,000 cells. When the data gets large, downloading everything to work in a web browser can be a slog. But for small jobs, relying on basic infrastructure can be a fast way to deliver a solution.
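Pushing a row of data into a sheet is a single HTTP call. A sketch of the request shape for the Google Sheets v4 `values:append` endpoint — the spreadsheet ID is a placeholder, and a real call would also need an OAuth token, omitted here:

```python
import json

SHEETS_API = "https://sheets.googleapis.com/v4/spreadsheets"

def build_append_request(spreadsheet_id: str, sheet_range: str, rows):
    """Build the URL and JSON body for the Sheets v4 values:append call.
    (Request shape per the public API docs; auth headers not shown.)"""
    url = (f"{SHEETS_API}/{spreadsheet_id}/values/"
           f"{sheet_range}:append?valueInputOption=RAW")
    body = json.dumps({"values": rows})
    return url, body

url, body = build_append_request("SPREADSHEET_ID", "Sheet1!A1",
                                 [["2024-06-01", "orders", 42]])
print(url)
```

Each inner list becomes one appended row, which is often all the “database” a small internal tool ever needs.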
Repatriate low-use services to old repurposed hardware
The cloud machines are easy to provision and simple to use, but they get expensive if you use them constantly. The easiest places to save money are secondary and tertiary databases that need to be available all the time but are not mission-critical. These can move back to on-premises machines, often repurposed old hardware running an open source operating system. A few backroom machines with fat disks can also be effective archives for log files and soon-to-be-forgotten database entries. Instead of selling off old machines at surplus prices, save money on cloud storage by moving the data back into the server closet.
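Those backroom archives get even cheaper if the logs are compressed before they land on the fat disks. A small sketch using the standard library, demonstrated on a throwaway temp file:

```python
import gzip
import os
import shutil
import tempfile

def archive_log(path: str) -> str:
    """Gzip a log file in place -- the cheap backroom alternative to
    paying cloud storage rates for soon-to-be-forgotten data."""
    gz_path = path + ".gz"
    with open(path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    os.remove(path)
    return gz_path

# Demo with a throwaway file:
tmp = os.path.join(tempfile.gettempdir(), "app.log")
with open(tmp, "w", encoding="utf-8") as fh:
    fh.write("transaction ok\n" * 1000)
print(archive_log(tmp))
```

Repetitive log text compresses dramatically, so the same surplus hardware holds years of history instead of months.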
Rethink upgrades
Software upgrades can be a challenge. Some fix security lapses and stability problems; those should be installed quickly. Others bundle new features and capabilities that are welcomed warmly when money is flowing freely but now deserve a careful eye toward hidden expense. New features often mean more code, and more code usually wants more RAM and more CPU power. Even when upgrades are included in the cost of the license — and often they cost extra — paying for the greater computational resources can be an unnecessary expense. Everyone has been operating without the new features so far. Is an upgrade merely desirable? Or is it truly necessary and worth the extra cost?
Downgrade service gracefully
One of the best-hidden pools of fat lies in the resolution of images and videos. Switching to lower resolutions was one of the first things some major video streaming services did after the COVID-19 lockdown. Luxurious 4K video is wonderful, but most of the time people will do fine with much lower grades. Downgrading the number of pixels and applying more aggressive compression means fewer servers to deliver the data and lower bandwidth bills for its transport.
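The bandwidth arithmetic shows how big this lever is. A sketch using assumed, ballpark bitrates (roughly 16 Mbps for 4K and 3 Mbps for 720p; your encoder’s numbers will differ):

```python
def stream_gb_per_hour(bitrate_mbps: float) -> float:
    """Convert a video bitrate in megabits/second to gigabytes
    delivered per viewing hour (decimal GB)."""
    return bitrate_mbps / 8 * 3600 / 1000

# Assumed ballpark bitrates for illustration:
for label, mbps in [("4K", 16), ("720p", 3)]:
    print(f"{label}: {stream_gb_per_hour(mbps):.2f} GB/hour")
```

At these assumed rates, dropping from 4K (7.20 GB/hour) to 720p (1.35 GB/hour) cuts transport volume by more than 80 percent per viewing hour, before compression tuning even enters the picture.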
Revisit open source
Proprietary code has long earned its place in the market by offering superior features that justify the price. If your stack already includes some expensive code, it’s because that code delivers something important.
Ripping out running proprietary code to save on licensing costs may not make sense for the mission-critical core, but there are plenty of secondary and tertiary tools that are fair game. Perfectly adequate open source options for internal systems can save fees. The move may not be popular with the internal teams that enjoyed the extra features but they’ll thank you if a thinner license budget forestalls layoffs.
Stop feeding the bellies
As Dave Barry might say, “I’m not making this up.” One Silicon Valley dotcom invited me over for the nightly dinner in its office space. The food prepared by the staff chef was good, but after dinner, just a few feet away on the dining room counter, stood at least twenty bottles of top-shelf liquor along with red wine from Napa Valley’s best wineries. Some of the bottles cost $100-plus in so-called discount liquor stores. There were too many to count. Did I want any?
Does your tech budget also include a line with euphemistic labels like “morale boosting” or “overtime services”? The liquor on that shelf cost more than a year’s worth of server time for a big project. Buying alcohol and other treats has been a valid strategy for many organizations, and it’s hard to judge what this management team did in the past because the company has survived and grown since my dinner. But times have changed. At the very least, cut out the scotch aged more than 10 years and the small-batch anything. Switch from Napa to Sonoma wines or, better yet, consider a box wine. And get rid of the doughnuts. This will save both the budgetary fat — and the literal, insulin-resistant fat that gathers around our bellies.