Traffic Engineering in HCX Enterprise

Take your network paths to an even greater level of performance. Join us in this post as we explore two traffic engineering technologies that improve the resiliency of your network paths and let the Network Extension service use them more efficiently. Find out how HCX Enterprise achieves this and more, delivering functionality beyond that of HCX Advanced.

This blog is part of a series. Part 1 | Part 2 | Part 3 | Part 4

In a recent post I described VMware HCX service appliance relationships. HCX deploys the selected services in initiator/receiver pairs that connect to each other automatically; the HCX transport tunnel is built dynamically, regardless of the network underlay.


The IT industry increasingly expects every service to work well across the wide variation in data center networks and the internet, and that expectation extends to the HCX transport service. We built the new HCX Traffic Engineering features, TCP Flow Conditioning and Application Path Resiliency, to meet it.


TCP Flow Conditioning

VMware customers use the HCX Network Extension service for turn-key connectivity between their legacy, public, and private clouds. This service allows networks based on the Distributed Virtual Switch, NSX, or a third-party virtual switch to be bridged to the target environment without hassle. The process is a simple right-click operation with three parameters. With a Network Extension in place, the applications moved to the target can communicate with the original network without any awareness of HCX, which silently encapsulates and forwards traffic bound for the extended network.

Any two communicating virtual machines use their interface MTU setting to negotiate the maximum size of any data segment. The virtual machines use their own interface MTU, but have no awareness of any upstream network MTU, or of any encapsulation or tunneling operations.


Below is an example of what the segment negotiation may look like (note that the VMs are only aware of their own vNIC MTU during this process). During the TCP handshake, each virtual machine uses the TCP header options to advertise the maximum amount of data it wants to receive in any individual packet/segment. The lowest proposed value is used.
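A minimal sketch of that negotiation (hypothetical helper names; note that real TCP stacks subtract the IP and TCP header bytes from the MTU when advertising an MSS, a detail the simplified numbers in this post ignore):

```python
# Hypothetical sketch of TCP MSS negotiation: each side advertises the
# maximum segment size it can receive, derived from its own vNIC MTU,
# and the lower of the two advertisements is used for the session.

def advertised_mss(vnic_mtu: int, ip_header: int = 20, tcp_header: int = 20) -> int:
    """MSS a host advertises in its SYN/SYN-ACK, based only on its own vNIC MTU."""
    return vnic_mtu - ip_header - tcp_header

def negotiated_mss(tx_vm_mtu: int, rx_vm_mtu: int) -> int:
    """Each VM only knows its own MTU; the session uses the lower advertisement."""
    return min(advertised_mss(tx_vm_mtu), advertised_mss(rx_vm_mtu))

print(negotiated_mss(9000, 1500))  # → 1460 (the 1500-MTU side wins)
```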

To simplify the rest of the explanation, let’s assume the following scenario:

  1. The negotiated value is 1500 bytes, and the transmitting virtual machine “TX-VM” needs to send 9000 bytes of data to the target “RX-VM”.
  2. The virtual machines and HCX appliances all use a 1500 MTU.

Here is what happens:

Without TCP Flow Conditioning

  1. The 9000 bytes are carved and transmitted as six 1500-byte segments.
  2. Network Extension encapsulation adds 150 bytes to each, creating six 1650-byte packets.
  3. Each 1650-byte packet is fragmented into two packets (1500 + 150 bytes) to respect the uplink MTU.
  4. Twelve packets are sent over the network, averaging 750 bytes of original data per packet.

With TCP Flow Conditioning enabled:

  1. HCX Network Extension intercepts the TCP handshake and dynamically adjusts the Maximum Segment Size (MSS). In this scenario, the MSS is set to 1350 instead of 1500.
  2. The 9000 bytes are carved and transmitted as six 1350-byte segments and one 900-byte segment.
  3. Network Extension encapsulation adds 150 bytes to each, creating 1500-byte packets.
  4. Each 1500-byte packet fits within the 1500 MTU and is not fragmented.
  5. Seven packets are sent over the network, averaging roughly 1285 bytes of original data per packet.
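The arithmetic in the two scenarios can be sketched as follows (simplified as in this post: segment size is treated as equal to the MTU, and all overhead other than the 150-byte Network Extension encapsulation is ignored):

```python
import math

DATA = 9000   # bytes TX-VM sends to RX-VM
MTU = 1500    # MTU used by the VMs and HCX appliances
ENCAP = 150   # Network Extension encapsulation overhead per packet

def packets_without_conditioning() -> int:
    segments = math.ceil(DATA / MTU)   # six 1500-byte segments
    # Each 1650-byte encapsulated packet exceeds the MTU and splits in two.
    return segments * 2                # 12 packets on the wire

def packets_with_conditioning() -> int:
    mss = MTU - ENCAP                  # MSS clamped to 1350
    return math.ceil(DATA / mss)       # six 1350-byte + one 900-byte = 7

for fn in (packets_without_conditioning, packets_with_conditioning):
    n = fn()
    print(f"{fn.__name__}: {n} packets, avg {DATA // n} data bytes each")
```

Running this reproduces the numbers above: 12 packets averaging 750 data bytes without conditioning, versus 7 packets averaging about 1285 data bytes with it.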


In a nutshell, TCP Flow Conditioning dynamically optimizes the segment size for traffic traversing the Network Extension path.  

HCX Application Path Resiliency

In today’s technology landscape, virtual machines communicate without any visibility into the underlying networks. These networks are often built to be resilient through the use of aggregated switched paths (e.g., LACP) and Equal-Cost Multi-Path (ECMP) dynamic routing.

Applications do not need visibility into these resilient multi-paths as long as the source address can reach the target address. The network uses a load-balancing hash to pin each flow to a single path, minimizing the need to reorder transmitted packets (all good things!).

The HCX Interconnect and Network Extension transport in HCX Enterprise take advantage of these existing multi-path hashing behaviors to achieve a powerful improvement. HCX Application Path Resiliency creates multiple FOU (Foo-over-UDP) tunnels between each source and destination uplink IP pair for improved performance, resiliency, and path diversity.

Implementing the Traffic Engineering Features

Both of the features explored here are enabled without additional configuration in HCX Enterprise deployments, once the update that includes them is applied to an existing HCX Service Mesh.

About the Authors

Gabe Rosas

Technical Product Manager at VMware

Gabe Rosas is the Technical Product Manager for HCX. He has been with VMware since 2012. He has expertise with designing and operating traditional data center and virtualization infrastructure, and networks. Gabe is proudly a father of six, the “half-dozen dad”. He is an ultra-running and OCR enthusiast. You can follow Gabe on his HCX related blog at hcx.design, or on Twitter @gabe_rosas.
