This article identifies the different network subnets that come into play when planning your SDDC deployment, explains the primary considerations for selecting new networks to be used within the SDDC, and identifies the existing networks that will interact with the SDDC. The red numbers in this diagram correspond to the description paragraphs below.
VMware Cloud on AWS Networks
1 – SDDC management CIDR
The Management CIDR is used for all of the internal management components within the SDDC, such as the ESXi hosts (management and vMotion interfaces), vCenter, NSX Manager, and any other fully-managed add-on components deployed into the SDDC, for example the Site Recovery components.
This network must be one of three available sizes: /16, /20, or /23. The primary factor in selecting a Management CIDR size is the anticipated scalability requirement of the SDDC. A /23 network can support only 27 ESXi hosts, while a /20 can support up to 251, and a /16 up to 4091. If there is any consideration of scaling beyond 4 hosts, we recommend at least a /20 network. In addition, if Hybrid Cloud Extension (HCX) is a consideration, then a /20 or /16 must be used. Because this network cannot be changed after the SDDC has been deployed, it is best to use a /23 only for testing, or for SDDCs with a specific purpose and lifetime that will not require much growth in capacity.

In addition, the Management CIDR cannot contain the network 192.168.1.0/24, as this network is used for a default Compute Network. The networks 10.0.0.0/15 (10.0.0.0–10.1.255.255) and 172.31.0.0/16 are also reserved, and the Management CIDR cannot overlap any of these ranges.
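The sizing and reservation rules above can be checked programmatically before deployment. Below is a minimal sketch using Python's standard `ipaddress` module; the function name is our own, and the prefix lengths and reserved ranges are simply those listed above:

```python
import ipaddress

# Ranges the Management CIDR may not overlap (per the rules above)
RESERVED = [
    ipaddress.ip_network("192.168.1.0/24"),  # default Compute Network
    ipaddress.ip_network("10.0.0.0/15"),     # reserved (10.0.0.0-10.1.255.255)
    ipaddress.ip_network("172.31.0.0/16"),   # reserved
]
ALLOWED_PREFIXES = {16, 20, 23}  # the only supported Management CIDR sizes

def validate_management_cidr(cidr: str) -> list[str]:
    """Return a list of problems with a proposed Management CIDR (empty list = OK)."""
    net = ipaddress.ip_network(cidr, strict=True)
    problems = []
    if net.prefixlen not in ALLOWED_PREFIXES:
        problems.append(f"/{net.prefixlen} is not a supported size (/16, /20, /23)")
    for r in RESERVED:
        if net.overlaps(r):
            problems.append(f"overlaps reserved range {r}")
    return problems

print(validate_management_cidr("10.2.0.0/20"))  # []  (valid choice)
print(validate_management_cidr("10.0.0.0/23"))  # ['overlaps reserved range 10.0.0.0/15']
```

Running such a check against your enterprise IP plan before creating the SDDC is worthwhile, since the Management CIDR cannot be changed afterwards.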
When connecting the SDDC back to an on-premises environment, this network must be connected via an IPSec VPN from the SDDC’s Management Gateway (MGW) to an on-premises VPN endpoint. The tunnel is used for communication between the SDDC’s management devices and the on-premises management components (e.g. vCenter, PSC, AD, and DNS servers), and optionally for any outbound internet access from the SDDC’s management devices (by default this traffic is routed out of the SDDC directly). It can also be used by on-premises administrators to access the SDDC vCenter and Site Recovery management consoles.
This subnet should be unique within the entirety of the enterprise network.
If a Direct Connect (DX) is being used, then the defined network will be subdivided, and a subnet of the CIDR will be excluded from the IPSec VPN and instead routed over the Direct Connect for vMotion and VM migration traffic. The remaining subnets within the Management CIDR will continue to connect over the IPSec VPN.
2 – AWS VPC subnet
The SDDC must be connected to an AWS account when it is created. This account requires a VPC with a subnet in the Availability Zone (AZ), within the AWS Region, where the SDDC will reside. Because AWS randomizes AZ names from customer to customer, the recommendation is to define a subnet in every AZ within the Region; this way, you can select the desired AZ by its subnet when creating the SDDC. The ability to select the AZ where the SDDC is deployed (VMware Cloud on AWS does not necessarily have facilities in every AZ within an active AWS Region) allows you to keep the SDDC in the same AZ as any existing AWS workloads, to minimize data transfer latency and costs, or conversely, to separate it into a different AZ to maximize availability.
The subnets defined in the AWS VPC should also be unique within the enterprise network (including any other networks planned for the SDDC).
The minimum size for these subnets is /27, but for maximum scalability of the SDDC, we recommend using /26 subnets. Once the SDDC has been created using the selected subnet, it is important not to delete or change that subnet. In addition, any VPC subnets that need to communicate with the SDDC compute networks must be associated with the main route table; this includes the subnet the SDDC is connected to, as VMware Cloud on AWS currently updates only the main route table with SDDC routes. These routes should not be modified manually.
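Carving one such subnet out of the VPC CIDR for each AZ in the Region is mechanical; a brief sketch with Python's `ipaddress` module, where the function name is ours and the AZ names are illustrative (the real names come from your AWS account):

```python
import ipaddress

def plan_az_subnets(vpc_cidr: str, az_names: list[str], prefix: int = 26) -> dict[str, str]:
    """Allocate one subnet per AZ from the VPC CIDR (/26 recommended above)."""
    subnets = ipaddress.ip_network(vpc_cidr).subnets(new_prefix=prefix)
    return {az: str(next(subnets)) for az in az_names}

# AZ names are illustrative placeholders
plan = plan_az_subnets("172.20.0.0/24", ["us-east-1a", "us-east-1b", "us-east-1c"])
print(plan)
# {'us-east-1a': '172.20.0.0/26', 'us-east-1b': '172.20.0.64/26', 'us-east-1c': '172.20.0.128/26'}
```

A /24 VPC CIDR, as in this example, yields four /26 subnets, which covers most Regions; size the VPC CIDR up if the Region has more AZs.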
The VPC subnet is used only for communication between workloads running in your SDDC and AWS services running within your AWS account (such as EC2 instances, RDS instances, or S3 buckets).
3 – SDDC compute networks
SDDC Compute networks are assigned to Logical Switches in the SDDC for workload VMs to connect to. These networks are defined after the SDDC is created through the SDDC vCenter.
Connectivity to these networks is through an IPSec VPN from the SDDC’s Compute Gateway (CGW) to an on-premises VPN endpoint, which can be the same endpoint used for the MGW or a different one.
Currently, dynamic routing is not supported over the IPSec VPN, so the VPN configuration must be modified whenever a new Compute network is added that must communicate with the on-premises environment. (Support for this capability is planned.)
Alternatively, a compute network can be bridged (L2 extension) to an on-premises VLAN using the NSX Standalone L2VPN endpoint appliance. If NSX is deployed in the on-premises environment, a compute network can instead be bridged to an existing on-premises Logical Network with an NSX Edge gateway.
These networks can connect to the Internet using NAT through the CGW when the appropriate FW rules are defined on it. Note that any bandwidth charges incurred by traffic leaving the CGW will be billed back through your VMware Cloud on AWS account.
Each logical switch supports a single network, which must be /22 or smaller. If the Compute network is not bridged via L2VPN, the default gateway is defined when the logical network is created.
There is a limit of roughly 500 routed SDDC compute networks and 25 networks bridged via L2VPN. Connectivity between routed networks within the SDDC is handled by the SDDC’s DLR, so that traffic stays within the SDDC. For extended/bridged networks, however, the default gateway must be on the on-premises side, so traffic must traverse the on-premises network to be routed. Use either a physical router or an NSX Edge as that gateway; an on-premises DLR can conflict with the SDDC’s DLR.
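When adding a routed compute network, the /22 size limit and overlap with already-defined networks are both easy to check up front. A small sketch, again using Python's `ipaddress` module (the function name and sample CIDRs are our own):

```python
import ipaddress

MAX_PREFIX = 22  # each logical switch supports a single network of /22 or smaller

def check_compute_network(cidr: str, existing: list[str]) -> list[str]:
    """Check a proposed compute network against the size limit and existing CIDRs."""
    net = ipaddress.ip_network(cidr)
    problems = []
    if net.prefixlen < MAX_PREFIX:  # smaller prefix length = larger network
        problems.append(f"/{net.prefixlen} is larger than the /22 maximum")
    for other in existing:
        if net.overlaps(ipaddress.ip_network(other)):
            problems.append(f"overlaps existing network {other}")
    return problems

print(check_compute_network("10.10.4.0/24", ["10.10.0.0/22"]))  # []
print(check_compute_network("10.10.0.0/16", ["10.10.4.0/24"]))
# ['/16 is larger than the /22 maximum', 'overlaps existing network 10.10.4.0/24']
```

The `existing` list should include not only other compute networks but also the Management CIDR and the connected VPC subnet, since those must not overlap either.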
4 – On-premises networks
The most straightforward way to connect the SDDC to the on-premises networks is to define the entirety of the enterprise network’s subnets as remote networks in the SDDC CGW and MGW IPSec VPN connections. This allows communication between the SDDC and any node on the enterprise network, as long as the SDDC’s network subnets are routed to the on-premises VPN endpoint. In this case, the SDDC networks must be unique within the enterprise, and preferably must not overlap any enterprise subnets.
Traffic can still be controlled through the Firewall on the MGW and CGW in the SDDC, as well as with the on-premises firewall(s) on the VPN endpoint.
However, depending on the required connectivity, it is also possible to include only specific on-premises networks in the SDDC CGW and MGW IPSec VPN remote networks. This limits connectivity to only the selected on-premises networks. It can be used to improve security, to reduce complexity when the on-premises network is made up of many disparate IP CIDRs that cannot be easily summarized, or to accommodate a requirement for IP duplication between the enterprise network and the SDDC.
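Before deciding that the on-premises CIDRs "cannot be easily summarized," it is worth collapsing them mechanically: Python's `ipaddress.collapse_addresses` merges adjacent and overlapping networks into the fewest covering prefixes, which is exactly the list you would enter as VPN remote subnets. A minimal sketch (the function name and sample CIDRs are our own):

```python
import ipaddress

def summarize_remote_networks(cidrs: list[str]) -> list[str]:
    """Collapse adjacent/overlapping CIDRs into the fewest VPN remote subnets."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [str(n) for n in ipaddress.collapse_addresses(nets)]

on_prem = ["10.50.0.0/24", "10.50.1.0/24", "10.50.2.0/23", "192.168.10.0/24"]
print(summarize_remote_networks(on_prem))
# ['10.50.0.0/22', '192.168.10.0/24']
```

Note that `collapse_addresses` only merges exactly contiguous networks; it never widens a prefix to cover address space you did not list, so the summarized list permits precisely the same traffic as the original one.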
In most cases, the MGW IPSec VPN remote subnets would include the network(s) where core on-premises services run (Active Directory, DNS, vCenter, and ESXi vMotion and management IPs), as well as any networks containing the administrators who will access the SDDC vCenter.
For routed networks (networks that exist only in the SDDC), the CGW IPSec VPN remote subnets should include any subnets containing on-premises application servers that require connectivity to VMs in the SDDC, as well as any networks containing end-user devices that will connect to those VMs.
It is also possible to extend, or stretch, an existing on-premises network into a Compute Network in the SDDC using L2VPN. These networks are not included in the SDDC CGW IPSec VPN remote subnets, as they appear as direct Layer 2 (L2) extensions of the on-premises VLANs they are mapped to. If NSX is installed in the on-premises environment, the SDDC compute network can be mapped to an NSX Logical Network or a VLAN.