Sizing for Memory-Bound Workloads

In this blog, we look at sizing for memory-bound workloads in the VMware Cloud on AWS sizer tool. Learn how your assumptions affect the sizing calculation and what steps you can take to get more accurate results.

In a previous post, we discussed the overall sizing process for VMware Cloud on AWS. In this blog, we assume that your workloads are memory bound. We will look at how the sizer tool takes memory into account and discuss what you can do to improve the accuracy of your memory-bound sizing estimate.

Memory-Bound Host Calculation

To figure out how assumptions impact the VMware Cloud on AWS sizer estimate, we need to first understand the math behind the sizer tool itself. The calculation for memory-bound hosts is the following:

Memory Bound Hosts = [(RAM per VM) × (# of VMs) × Resource Utilization Plan] / (RAM per Instance)

Resource Utilization Plan = Memory Utilization / Target RAM Ratio

The inputs you provide are: # of VMs, RAM per VM, and the components of the Resource Utilization Plan. The sizer tool supplies RAM per Instance and outputs the number of Memory-Bound Hosts, which is the value we are after.

Your first two input parameters are straightforward. But the Resource Utilization Plan requires some explanation and is where your assumptions play a large role in the calculation.

The Resource Utilization Plan is the ratio between the average memory utilization of workloads at the individual guest VM level and the memory oversubscription at the system level. Memory Utilization is the average per-VM memory utilization of workloads: the higher it is, the more of the RAM claimed at creation time the workloads actually use. Target RAM Ratio is a system-wide value. A higher Target RAM Ratio drives the sizer's host count down because you are willing to oversubscribe the available RAM. This is a reasonable assumption when you don't expect all VMs in your environment to be actively using their allocated memory at the same time. However, as long as workloads are active, memory claimed by the hypervisor usually isn't released as aggressively as CPU resources, so it is advisable not to set the Target RAM Ratio too high.
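The formula above can be sketched as a small Python function. This is an illustrative approximation, not the sizer tool itself: the real sizer subtracts management-VM and vSAN overhead from the per-instance RAM, while here the 512 GB default simply matches the nominal RAM of an i3 host.

```python
import math

def memory_bound_hosts(ram_per_vm_gb, num_vms, memory_utilization,
                       target_ram_ratio, ram_per_instance_gb=512):
    """Estimate the memory-bound host count using the sizer formula above.

    Illustrative only: the real sizer also accounts for management and
    vSAN overhead. The 512 GB default matches a nominal i3 host.
    """
    resource_utilization_plan = memory_utilization / target_ram_ratio
    raw_hosts = (ram_per_vm_gb * num_vms * resource_utilization_plan) / ram_per_instance_gb
    # Partial hosts cannot be deployed, so round up to the next integer.
    return math.ceil(raw_hosts)
```

For instance, `memory_bound_hosts(8, 2000, 0.80, 1.3)` returns 20, matching the worked example in the next section.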

Example Calculation

To clarify the equations above, let’s look at an example. Suppose you collect the following data from your on-premises environment:

  • 2,000 VMs
  • 8 GB RAM per VM
  • 80% average memory utilization per VM
  • 30% memory oversubscription in your on-premises environment

Given the on-premises information above, we can calculate the number of i3 hosts required. Remember, you do not need to specify the value for RAM per Instance; that value is supplied by the sizer tool based on the instance type selected.

Memory Bound Hosts = [8 GB × 2,000 × (0.80 / 1.30)] / 512 GB

Memory Bound Hosts = 20

Given the on-premises data in this example, the sizer tool estimates that 20 i3 hosts are required.

Note:

  • The RAM per Instance is actually less than 512GB (in an i3 instance) because some of the memory is consumed by management VMs, vSAN memory overhead, and potentially other factors that may be taken into account in the future. This doesn't affect the end result in this case, so we used 512GB in the equation for simplicity.
  • The estimated number of hosts is rounded up to the nearest integer since it is not possible to deploy partial hosts.
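To illustrate both notes, a quick check shows that using a hypothetical 500 GB of usable RAM per host (after overhead) instead of the nominal 512 GB leaves the rounded-up result unchanged in this example:

```python
import math

# Active RAM needed: 8 GB x 2,000 VMs x (0.80 utilization / 1.30 overcommit)
ram_needed_gb = 8 * 2000 * (0.80 / 1.30)  # about 9,846 GB

# Compare the nominal i3 RAM with a hypothetical usable value after overhead.
for usable_gb in (512, 500):
    hosts = math.ceil(ram_needed_gb / usable_gb)  # round up: no partial hosts
    print(f"{usable_gb} GB usable per host -> {hosts} hosts")
```

Both cases land on 20 hosts, which is why the simplification is harmless here; with much heavier overhead, the rounded-up count could change.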

Practical Considerations for Memory-Bound Calculations

Let’s now turn to a practical example for how to do sizing when workloads are memory bound. A previous article described the overall process; here we will only focus on a few critical aspects. 

On-Premises Data

The on-premises memory data you need is:

  • Total number of VMs
  • RAM per VM
  • Memory Utilization per VM
  • Target RAM Ratio (i.e. RAM overcommit)

In our previous example, we had 2,000 VMs with 8GB of RAM, Memory Utilization = 80% and Target RAM Ratio = 1.3. Based on these values, the estimated number of hosts was 20.

Upper Bound

An upper bound provides a sense of how far off our estimate may be. To get an upper bound, we set the Memory Utilization and Target RAM Ratio to very conservative values. Of course, if you currently do not oversubscribe memory on-premises, then your sizing from the previous step is already conservative; essentially, you already have an upper bound and you can skip this part.

Assuming you do plan to oversubscribe memory, let’s move forward with our upper bound calculation and consider our example of 2,000 VMs, with each VM allocated an average of 8GB. There is not much wiggle room in these settings. We know how many VMs we want to deploy in VMware Cloud on AWS, and we have their average memory needs. On the other hand, there is a lot of wiggle room when we consider memory oversubscription. If you oversubscribe too much, you can run into performance issues due to things like memory ballooning or swapping to disk. If you don’t oversubscribe enough, you could end up overprovisioning and paying for too much capacity.

No matter which end of the memory oversubscription spectrum you are on, it is prudent to establish an upper bound for your workload memory needs. The way to do this is to assume no oversubscription and maximum memory utilization:

  • Memory Utilization = 100%
  • Target RAM Ratio = 1

If you wanted to go even further, it is possible to set the Target RAM Ratio to a value below 1 to undersubscribe memory, which leads to an even higher upper bound. You might do this on-premises, for example, to keep extra memory capacity in case of a host failure. There is little reason to do this in VMware Cloud on AWS, where VMware replaces a failed host for you in minutes. You may still choose to undersubscribe memory for some other reason, but in most cases Target RAM Ratio = 1 is a good value for establishing an upper bound.
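To see how sensitive the host count is to this assumption, here is a short sketch sweeping the Target RAM Ratio for our running example (2,000 VMs at 8 GB each, with Memory Utilization held at 100%; the specific ratio values are hypothetical):

```python
import math

def hosts_for_ratio(target_ram_ratio, memory_utilization=1.0,
                    ram_per_vm_gb=8, num_vms=2000, ram_per_instance_gb=512):
    # The sizer formula from earlier in the post, rounded up to whole hosts.
    needed_gb = ram_per_vm_gb * num_vms * (memory_utilization / target_ram_ratio)
    return math.ceil(needed_gb / ram_per_instance_gb)

for ratio in (0.9, 1.0, 1.3):
    print(f"Target RAM Ratio {ratio}: {hosts_for_ratio(ratio)} hosts")
```

At 100% utilization, undersubscribing to a ratio of 0.9 pushes the count from 32 to 35 hosts, while oversubscribing at 1.3 drops it to 25, showing why this single assumption deserves care.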

We now calculate the upper bound for our previous example (2,000 VMs with 8GB of RAM per VM):

Memory Bound Hosts = [(RAM per VM) × (# of VMs) × Resource Utilization Plan] / (RAM per Instance)

Memory Bound Hosts = [8 GB × 2,000 × (1 / 1)] / 512 GB

Memory Bound Hosts = 32

The upper bound for this example is 32 hosts. Remember that with Memory Utilization = 80% and Target RAM Ratio = 1.3, the estimated number of hosts was 20. With no memory oversubscription, we require 60% more hosts! To narrow this estimate to a more reasonable range takes experimentation and tradeoff considerations.

What if you are not memory bound?

Let’s suppose that you run the calculation above with no memory oversubscription and the sizer shows that your sizing is not memory bound; in this example, storage turns out to be the limiting factor. If this is the case, it would be a good idea to focus your energy on that actual constraint. Sizing is constrained by storage, memory, or CPU, and if your upper-bound calculation shows that the sizing is not memory bound, you will gain little, if anything, by continuing to fine-tune your memory oversubscription assumptions.

Test sample workloads and consider tradeoffs

Assuming that your workloads are memory bound, you may find that your sizing estimate based on your on-premises data is significantly different from the upper-bound calculation. To tighten up your estimate, you can run some sample workloads in a test VMware Cloud on AWS SDDC to observe actual memory behavior. This will give you confidence in your assumptions and estimates and also avoid potential surprises when you go to production. 

As you come up with your sizing estimate, consider also whether you prefer to overestimate or underestimate. Either can be a viable strategy; it depends on your particular situation and on how much you can really take advantage of the elasticity of the cloud with VMware Cloud on AWS.

Conclusion

In this blog, we looked at the details of host memory calculations in the VMware Cloud on AWS Sizer. We also discussed how your assumptions affect the sizing calculation and what steps you can take to get more confidence that you have sized your deployment accurately.

About the Authors

Dan Florea

Product Line Manager at VMware

Dan Florea is a Product Line Manager at VMware, focused on the VMware Cloud on AWS product. Dan has a deep technical background, spanning across virtualization, networking, and storage. Among other product areas, he is responsible for the Stretched Clusters and Elastic DRS features and is passionate about helping customers have the best possible experience on VMware Cloud on AWS.
