In the first blog in this series, we looked at why it's so important to examine an application's workload patterns when sizing for any public cloud, including AWS, Microsoft Azure, Google Cloud Platform, and others. In this post, we'll look at another common pitfall that we call the cloud instance "bump-up loop."

Here's how it works. You use a reporting tool that says your application workload is running at nearly 100% utilization for 3 hours overnight (see figure below).

[Figure: workload running at nearly 100% CPU utilization for 3 hours overnight. Source: Densify]

The tool is designed to interpret high utilization as bad, so it concludes that the workload is under-provisioned and recommends bumping up the CPU resources, which increases your costs. But a funny thing happens: despite the change, the next day the workload still runs at 100%, just for a shorter period of time. Once again, the tool says to throw more resources at it. The next day the workload still runs at 100%, again for a shorter period. And so on. Now you're stuck in an endless capacity bump-up loop.

Why? Because some applications will take as much resource as you give them. It's kind of like how goldfish will just keep eating and never be sated. In the public cloud, your costs keep escalating for little-to-no payoff.

To avoid the bump-up loop, you need to understand what the workload is doing and how it behaves. In the example above, the batch job completed its work in 3 hours, which was completely satisfactory even though the CPU was pegged at 100%. By giving it more processing resources, all you've done is increase your costs: a bigger goldfish with no business benefit.

People fall into the bump-up loop because they're using simple tools that aren't smart enough to recognize what's really happening. It's not enough to take a high-level view of workload usage; you also need to understand each individual workload pattern at a granular level. Some workloads running at 100% may need a larger instance to meet their requirements, while others, such as the example above, do not. Limited tools just can't give you that insight.

If all you do is look at peaks or averages, and not the actual workload patterns, you're bound to make decisions that end up costing you more, such as paying for a "large" AWS instance when all you really need is a "medium." On the other hand, if you use tools that give you a deeper understanding of workload patterns and requirements, you can create policies that control allocations based on behavior, guiding instance sizing to optimize your cloud resource spend and avoid the bump-up loop.

In the next blog post, we'll look at another tricky issue: how to size memory for cloud instances. As with analyzing CPU use, analyzing memory use is not as simple as you might think.

Densify is a predictive analytics service used by leading organizations and service providers to reduce cost and performance risk for public cloud and on-premises virtual infrastructure in real time. To learn more, visit www.densify.com.
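To make the loop concrete, here is a minimal, illustrative sketch in Python. The names, thresholds, and helper fields are hypothetical, and this is not a description of any particular tool's analytics; it simply contrasts a naive peak-based rightsizing rule with a behavior-aware policy of the kind described above.

```python
# Hypothetical sketch: a naive peak-based rightsizing rule vs. a policy
# that also considers whether the workload is a time-boxed batch job
# that already finishes within its allowed window.

from dataclasses import dataclass
from typing import List


@dataclass
class Workload:
    name: str
    cpu_samples: List[float]       # hourly CPU utilization, 0-100
    is_batch: bool                 # runs as a scheduled batch job?
    completes_within_window: bool  # finishes inside its allowed window?


def naive_recommendation(w: Workload, peak_threshold: float = 95.0) -> str:
    """The 'bump-up loop' rule: any near-100% peak triggers an upsize."""
    if max(w.cpu_samples) >= peak_threshold:
        return "upsize"
    return "keep"


def pattern_aware_recommendation(w: Workload,
                                 peak_threshold: float = 95.0,
                                 sustained_hours: int = 12) -> str:
    """A behavior-based policy: high CPU only matters if the batch job
    misses its window or the saturation is sustained for much of the day."""
    hot_hours = sum(1 for s in w.cpu_samples if s >= peak_threshold)
    if w.is_batch and w.completes_within_window:
        return "keep"  # a few pegged hours is the batch job working as intended
    if hot_hours >= sustained_hours:
        return "upsize"
    return "keep"


if __name__ == "__main__":
    # Overnight batch job: 3 hours at ~100% CPU, nearly idle the rest of the day.
    overnight_batch = Workload(
        name="nightly-report",
        cpu_samples=[99, 100, 98] + [5] * 21,
        is_batch=True,
        completes_within_window=True,
    )
    print(naive_recommendation(overnight_batch))          # "upsize" -> the loop begins
    print(pattern_aware_recommendation(overnight_batch))  # "keep"
```

The naive rule sees the overnight peak and recommends an upsize every single day, which is the bump-up loop in miniature; the behavior-aware policy recognizes a batch job that already meets its window and leaves the instance alone.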