In our last blog post in this series, we discussed how to overcome the "bump-up loop," in which you continually increase CPU resource allocations in public cloud instances even when the extra capacity isn't needed. In this post, we'll look at another cloud resource management challenge: memory.

Sizing memory appropriately is extremely important because memory is a major driver of cost, and it tends to be the most commonly constrained resource we see in virtual and cloud environments. Optimizing memory size often yields significant cost savings, whether your application is running in the cloud or on-premises.

As with CPU capacity, sizing memory resources can lead you down the slippery slope of the bump-up loop. One reason is that people often focus on the wrong statistics, because the data provided by cloud providers, and the limited analysis tools many people rely on, can be misleading. Let's look at why that is.

Take the example of a Linux application running on a physical server to which you allocate 16GB of memory. The application itself might only be using 5GB, but Linux will use the rest of the available memory for I/O buffer caches to speed things up. So when you measure usage, it will appear that your application is consuming 100% of available memory all the time (see figure below).

[Figure: memory utilization appears pinned at 100% even though the application needs far less. Source: Densify]

What this simplified view isn't showing you is whether there is actual pressure on the memory. The operating system is consuming all the available memory, but this is not apparent "from the outside." So you allocate more memory, which the OS gobbles up again, and the costly cycle continues.

Because of this, it's not enough to look at how much memory is being used; you need to analyze how it is being used. That means looking at the memory the system is actively using, often referred to as active memory or the resident set size. This is the actual working set of the application and operating system, and it is where we want to look for signs of pressure or waste. Even if memory usage is sitting at 100%, the active memory use might look something like this:

[Figure: active memory stays well below the allocated total. Source: Densify]

Having this kind of visibility is critical for making memory sizing decisions that meet your application requirements in the most cost-efficient way.

Most cloud services provide some sort of tooling for analyzing memory use, but these tools have limitations and can incur extra costs. Many of the monitoring statistics provided by cloud providers do not include memory by default. Amazon Web Services (AWS) CloudWatch, for example, only reports memory metrics if you install a monitoring agent, and it bills them as custom metrics. And even when companies pay for this additional data, they may not be able to act on it without analytics that know how to turn it into appropriate memory sizing recommendations.

To avoid costly mistakes in sizing memory, a one-dimensional analysis is not enough. Best practices dictate that the actual "required memory" for a cloud instance be a function of both consumed memory and active memory, with policy applied to ensure that enough extra memory is earmarked for the operating system to do a reasonable amount of caching without becoming bloated. This provides the optimal balance of cost efficiency and application performance.
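To make the buffer-cache effect concrete, here is a minimal sketch in Python (our illustration, not part of any vendor's tooling) that reads /proc/meminfo on a Linux host and contrasts the naive "used" figure with an estimate of the working set. It assumes a kernel of 3.14 or later, which exposes the MemAvailable field:

```python
#!/usr/bin/env python3
"""Contrast raw "used" memory with the actual working set on Linux."""

def read_meminfo(path="/proc/meminfo"):
    """Return /proc/meminfo fields as a dict of kibibyte values."""
    fields = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            fields[key.strip()] = int(rest.split()[0])  # values are in kB
    return fields

def main():
    m = read_meminfo()
    total = m["MemTotal"]
    cache = m.get("Buffers", 0) + m.get("Cached", 0)
    # Naive view: everything not free looks "used". This is the number
    # that makes a healthy box appear to sit at ~100% memory all the time,
    # because it counts the OS buffer/page cache as consumption.
    naive_used = total - m["MemFree"]
    # Better view: MemAvailable estimates how much memory applications
    # could still claim without swapping, treating cache as reclaimable.
    working_set = total - m["MemAvailable"]
    print(f"MemTotal:            {total / 1024:8.0f} MiB")
    print(f"Naive 'used':        {naive_used / 1024:8.0f} MiB "
          f"({100 * naive_used / total:.0f}%)")
    print(f"  of which cache:    {cache / 1024:8.0f} MiB (reclaimable)")
    print(f"Approx. working set: {working_set / 1024:8.0f} MiB "
          f"({100 * working_set / total:.0f}%)")

if __name__ == "__main__":
    main()
```

On the 16GB server described above, the naive figure would read close to 100% while the working-set line would hover near the 5GB the application actually needs.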
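And to illustrate the kind of two-dimensional calculation just described, here is a hypothetical policy function. The function name, the 25% cache headroom, and the 4 GiB cap are all assumptions invented for this sketch, not Densify's actual model; a real sizing engine would apply its own tuned policy:

```python
def required_memory_gib(active_gib, consumed_gib,
                        cache_headroom=0.25, cache_cap_gib=4.0):
    """Blend active and consumed memory into an instance sizing target.

    Policy (illustrative only): grant the OS a cache allowance
    proportional to the working set, capped at a fixed ceiling, but
    never recommend more than is actually being consumed today, and
    never less than the working set itself.
    """
    cache_allowance = min(active_gib * cache_headroom, cache_cap_gib)
    target = active_gib + cache_allowance
    return max(active_gib, min(target, consumed_gib))

# The 16GB example: a 5 GiB working set with cache inflating
# consumption to 16 GiB yields a ~6.25 GiB recommendation.
print(required_memory_gib(active_gib=5.0, consumed_gib=16.0))
```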
When the analytics are used on an ongoing basis, changes can be made in incremental steps to minimize risk and disruption. In the next blog post, we'll discuss some tips for modernizing cloud instances as a way to generate more savings on top of right-sizing allocations.

Densify is a predictive analytics service used by leading organizations and service providers to reduce cost and performance risk, in real time, across public cloud and on-premises virtual infrastructure. To learn more, visit www.densify.com.