VCP6-NV Study Notes – Section 3: Configure and Manage vSphere Networking–Part 2
Objective 3.2: Configure and Manage vDS Policies
Compare and contrast common vDS policies
Virtual Switch Objects Where Policies Apply for vSphere Distributed Switch:
- Distributed port group –> When you apply policies on a distributed port group, the policies are propagated to all ports in the group
- Distributed Port –> You can apply different policies on individual distributed ports by overriding the policies that are inherited from the distributed port group
- Uplink Port Group –> You can apply policies at uplink port group level, and the policies are propagated to all ports in the group
- Uplink port –> You can apply different policies on individual uplink ports by overriding the policies that are inherited from the uplink port group
Available policies:
- Teaming and Failover –> configure the physical NICs that handle the network traffic
- Security –> Provides protection of traffic against MAC address impersonation and unwanted port scanning
- Traffic shaping –> Lets you restrict the network bandwidth that is available to ports, but also allow bursts of traffic to flow through at higher speeds
- VLAN –> Lets you configure the VLAN tagging:
- External switch tagging (EST)
- Virtual switch tagging (VST)
- Virtual Guest tagging (VGT)
- Monitoring –> Enables and disables NetFlow monitoring on a distributed port or port group
- Traffic Filtering and Marking –> Lets you protect the virtual network from unwanted traffic and security attacks or apply a QoS tag to a certain traffic type
- Resource Allocation –> associate a distributed port or port group with a user-defined network resource pool. In this way, you can better control the bandwidth that is available to the port or port group
- Port Blocking –> Lets you selectively block ports from sending and receiving data.
Source: https://docs.vmware.com/en/VMware-vSphere/6.0/vsphere-esxi-vcenter-server-602-networking-guide.pdf
Configure dvPortgroup blocking policies
Port blocking policies allow you to selectively block ports from sending or receiving data.
Procedure:
- navigate to the distributed switch
- Right-click the distributed switch in the object navigator and select Distributed Port Group > Manage Distributed Port Groups
- Select the Miscellaneous check box and click Next
- Select one or more distributed port groups to configure and click Next
- From the Block all ports drop-down menu, enable or disable port blocking, and click Next
Source: https://docs.vmware.com/en/VMware-vSphere/6.0/vsphere-esxi-vcenter-server-602-networking-guide.pdf
Explain benefits of Multi-Instance TCP/IP stack
Starting with vSphere 5.5, it is possible to configure multiple TCP/IP stacks so that certain traffic types use a dedicated stack. Custom TCP/IP stacks can be used to handle the network traffic of other applications and services, which may require separate DNS and default gateway configurations.
Benefits:
- Separate memory Heap
- Custom ARP Table
- Custom Routing Table
- Improved network isolation
Procedure:
- Open an SSH connection to the host
- Log in as the root user
- Run: esxcli network ip netstack add -N="stack_name"
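A minimal end-to-end sketch in the ESXi Shell, assuming a hypothetical stack name "nfs-stack", port group "NFS-PG", and example addresses:
- esxcli network ip netstack add -N="nfs-stack"
- esxcli network ip netstack list (verify the stack was created)
- esxcli network ip interface add --interface-name=vmk3 --portgroup-name="NFS-PG" --netstack="nfs-stack"
- esxcli network ip interface ipv4 set -i vmk3 -I 192.168.50.10 -N 255.255.255.0 -t static
- esxcli network ip route ipv4 add --gateway 192.168.50.1 --network default -N "nfs-stack" (default gateway for this stack only)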
Source: https://docs.vmware.com/en/VMware-vSphere/6.0/vsphere-esxi-vcenter-server-602-networking-guide.pdf
Useful resource: http://www.vstellar.com/2017/09/17/configuring-and-managing-vmkernel-tcpip-stacks/
Configure load balancing and failover policies
It’s possible to configure various load balancing algorithms on a virtual switch to determine how network traffic is distributed between the physical NICs in a team:
- Route Based on Originating Virtual Port
- selects uplink based on virtual machine port ID
- each VM has an associated port ID –> to calculate an uplink, the virtual switch uses the port ID and the number of uplinks in the NIC team
- once an uplink is selected, the virtual switch always forwards traffic through the same uplink as long as the VM runs on the same port
- the vSwitch recalculates uplink assignments if a NIC is added to or removed from the team
- Advantages
- Even traffic distribution when the number of virtual NICs is greater than the number of physical NICs in the team
- Low resource consumption
- No changes on the physical switch are required
- Disadvantages:
- vSwitch is not aware of traffic load on the uplink
- The bandwidth available to a VM is limited to the speed of the uplink associated with its port ID
- Route based on Source MAC Hash
- vSwitch selects an uplink for a virtual machine based on the virtual machine MAC address. To calculate an uplink for a virtual machine, the virtual switch uses the virtual machine MAC address and the number of uplinks in the NIC team
- Advantages:
- A more even distribution of the traffic than Route Based on Originating Virtual Port, because the virtual switch calculates an uplink for every packet
- Virtual machines use the same uplink because the MAC address is static
- No changes on the physical switch are required
- Disadvantages:
- The bandwidth that is available to a virtual machine is limited to the speed of the uplink that is associated with the relevant port ID, unless the virtual machine uses multiple source MAC addresses
- Higher resource consumption than Route Based on Originating Virtual Port, because the virtual switch calculates an uplink for every packet
- vSwitch is not aware of traffic load on the uplink
- Route Based on IP Hash
- The virtual switch selects uplinks for virtual machines based on the source and destination IP address of each packet
- To calculate an uplink for a virtual machine, the virtual switch takes the last octet of both source and destination IP addresses in the packet, puts them through a XOR operation, and then runs the result through another calculation based on the number of uplinks in the NIC team (see the worked example after this list)
- Physical Switch Configuration
- To ensure that IP hash load balancing works correctly, you must have an Etherchannel configured on the physical switch
- Limitations and Configuration Requirements
- ESXi hosts support IP hash teaming on a single physical switch or stacked switches
- ESXi hosts support only 802.3ad link aggregation in static mode –> LACP is supported only on vDS >= 5.1 and the Cisco Nexus 1000V
- You must use Link Status Only as network failure detection with IP hash load balancing
- You must set all uplinks from the team in the Active failover list
- The number of ports in the Etherchannel must be same as the number of uplinks in the team
- Advantages
- A more even distribution of the load compared to Route Based on Originating Virtual Port and Route Based on Source MAC Hash, as the virtual switch calculates the uplink for every packet
- A potentially higher throughput for virtual machines that communicate with multiple IP addresses
- Disadvantages
- Highest resource consumption compared to the other load balancing algorithms
- The virtual switch is not aware of the actual load of the uplinks
- Requires changes on the physical network
- Complex to troubleshoot
- Route Based on Physical NIC Load
- is based on Route Based on Originating Virtual Port, where the virtual switch checks the actual load of the uplinks and takes steps to reduce it on overloaded uplinks.
- Advantages
- Low resource consumption because the distributed switch calculates uplinks for virtual machines only once and checking the load of uplinks has minimal impact
- The distributed switch is aware of the load of uplinks and takes care to reduce it if needed.
- No changes on the physical switch are required.
- Disadvantages
- The bandwidth that is available to virtual machines is limited to the uplinks that are connected to the distributed switch
- Use Explicit Failover Order –> No actual load balancing is available with this policy
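Worked example of the IP hash calculation (illustrative addresses): with source 192.168.1.10 and destination 192.168.1.20, the last octets are 0x0A and 0x14; 0x0A XOR 0x14 = 0x1E (30), and with 2 uplinks in the team, 30 mod 2 = 0, so the packet is sent through the first uplink.
To check or change the teaming policy from the ESXi Shell, a hedged sketch assuming vSwitch0 (this applies to standard switches only; vDS teaming is configured in the Web Client):
- esxcli network vswitch standard policy failover get -v vSwitch0
- esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash (valid values: portid, iphash, mac, explicit)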
Source: https://docs.vmware.com/en/VMware-vSphere/6.0/vsphere-esxi-vcenter-server-602-networking-guide.pdf
Configure VLAN settings
A virtual local area network (VLAN) is a group of hosts with a common set of requirements, which communicate as if they were attached to the same broadcast domain, regardless of their physical location.
To apply VLAN tagging globally on all distributed ports, you must set the VLAN policy on a distributed port group
- In the vSphere Web Client, navigate to the distributed switch.
- Navigate to the VLAN policy on the distributed port group or distributed port
- Distributed Port Group
- From the Actions menu, select Distributed Port Group > Manage Distributed Port Groups
- Select VLAN
- Select Port Group
- Distributed Port
- Select Related Objects, and select Distributed Port Groups
- Select a distributed port group
- Under Manage select Ports
- Select a port and click Edit distributed port settings
- Select VLAN
- Select Override next to the properties to override
- Distributed Port Group
- From the Type drop-down menu, select the type of VLAN traffic filtering and marking, and click Next
- VLAN
- VLAN Trunking
- Private VLAN
To configure VLAN traffic processing generally for all member uplinks, you must set the VLAN policy on the uplink port group. Use the VLAN policy at the uplink port level to propagate a trunk range of VLAN IDs to the physical network adapters for traffic filtering. The physical network adapters drop the packets from the other VLANs if the adapters support filtering by VLAN
Procedure:
- In the vSphere Web Client, navigate to a distributed switch
- Navigate to the VLAN policy on the uplink port group or uplink port
- Uplink port group
- Right-click the uplink port group and click Edit Settings
- Click VLAN
- Uplink port
- Click the uplink port group
- Select Manage –> select Ports
- Select a port and click Edit distributed port settings
- Click VLAN and select Override
- Type a VLAN trunk range value, or several ranges separated by commas
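For comparison, the equivalent VST setting on a standard switch can be applied from the ESXi Shell (a hedged example, port group name and VLAN ID are placeholders); on a standard port group, VLAN ID 4095 enables VGT:
- esxcli network vswitch standard portgroup set -p "VM-PG" -v 10
- esxcli network vswitch standard portgroup list (shows the VLAN ID of each port group)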
Source: https://docs.vmware.com/en/VMware-vSphere/6.0/vsphere-esxi-vcenter-server-602-networking-guide.pdf
Configure traffic shaping policies
Procedure:
- In the vSphere Web Client, navigate to the host.
- On the Manage tab, click Networking, and select Virtual switches
- Select the standard switch where the port group resides
- In the topology diagram, select a standard port group.
- Click Edit settings
- Select Traffic shaping and select Override next to the options to override
- Status (Enabled or Disabled)
- Average bandwidth (Kbit/s)
- Peak bandwidth (Kbit/s)
- Burst size (KB)
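The three parameters work together; an illustrative configuration (values are arbitrary): Average bandwidth 10,000 Kbit/s, Peak bandwidth 100,000 Kbit/s, Burst size 102,400 KB means the port is normally held to about 10 Mbit/s, but when it has accumulated bonus bandwidth it may send a burst of up to 102,400 KB at up to 100 Mbit/s before being throttled back to the average. Note that a standard switch shapes only outbound (egress) traffic, while a distributed switch can shape ingress and egress independently.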
Source: https://docs.vmware.com/en/VMware-vSphere/6.0/vsphere-esxi-vcenter-server-602-networking-guide.pdf
Enable TCP Segmentation Offload (TSO) support for a virtual machine
Use TCP Segmentation Offload (TSO) in VMkernel network adapters and virtual machines to improve the network performance in workloads that have severe latency requirements.
TSO on the transmission path of physical network adapters, and VMkernel and virtual machine network adapters improves the performance of ESXi hosts by reducing the overhead of the CPU for TCP/IP network operations. When TSO is enabled, the network adapter divides larger data chunks into TCP segments instead of the CPU. The VMkernel and the guest operating system can use more CPU cycles to run applications
Software simulation: run these esxcli network nic software set console commands to enable or disable the software simulation of TSO in the VMkernel:
- Enable:
- esxcli network nic software set --ipv4tso=1 -n vmnicX
- esxcli network nic software set --ipv6tso=1 -n vmnicX
- Disable:
- esxcli network nic software set --ipv4tso=0 -n vmnicX
- esxcli network nic software set --ipv6tso=0 -n vmnicX
To determine whether TSO is supported on the physical NIC, run: esxcli network nic tso get
Procedure to enable TSO on an ESXi host:
- In the vSphere Web Client, navigate to the host
- Manage –> Settings
- Expand the System section and click Advanced System Settings
- Edit the value of the Net.UseHwTSO parameter for IPv4 and of Net.UseHwTSO6 for IPv6 and click OK to apply changes
- To reload the driver module of the physical adapter, run the esxcli system module set console command in the ESXi Shell on the host
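The same check and settings can also be made from the ESXi Shell; a minimal sketch (vmnic names and values are examples):
- esxcli network nic tso get (shows whether hardware TSO is enabled per vmnic)
- esxcli system settings advanced set -o /Net/UseHwTSO -i 1
- esxcli system settings advanced set -o /Net/UseHwTSO6 -i 1
- esxcli system settings advanced list -o /Net/UseHwTSO (verify the current value)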
Procedure for VM:
- Linux Machine
- Verify that ESXi 6.0 supports the Linux guest operating system
- Verify that the network adapter on the Linux virtual machine is VMXNET2 or VMXNET3.
- To enable TSO, run the following command: ethtool -K ethY tso on
- To disable TSO, run the following command: ethtool -K ethY tso off
- Windows Machine
- Verify that ESXi 6.0 supports the Windows guest operating system
- Verify that the network adapter on the Windows virtual machine is VMXNET2 or VMXNET3.
- In the Network and Sharing Center on the Windows control panel, click the name of the network adapter
- Click Properties –> Configure
- On the Advanced tab, set the Large Send Offload V2 (IPv4) and Large Send Offload V2 (IPv6) properties to Enabled or Disabled
- Restart VM
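To verify the result inside a Linux guest (the interface name is an example), the current offload state can be read back with ethtool:
- ethtool -k eth0 | grep tcp-segmentation-offload (should report "on" when TSO is active)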
Source: https://docs.vmware.com/en/VMware-vSphere/6.0/vsphere-esxi-vcenter-server-602-networking-guide.pdf
Enable Jumbo Frame support on appropriate components
Jumbo frames let ESXi hosts send larger frames out onto the physical network. The network must support jumbo frames end-to-end, including physical network adapters, physical switches, and storage devices.
Note: Before enabling jumbo frames, check with your hardware vendor to ensure that your physical network adapter supports jumbo frames
You can enable jumbo frames on a vSphere distributed switch or vSphere standard switch by changing the maximum transmission unit (MTU) to a value greater than 1500 bytes. 9000 bytes is the maximum frame size that you can configure.
Procedure for vDS:
- In the vSphere Web Client, navigate to the distributed switch.
- On the Manage tab, click Settings and select Properties
- Click Edit
- Click Advanced and set the MTU property to a value greater than 1500 bytes. Max 9000 bytes
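From the ESXi Shell, the MTU a host sees on its distributed switches can be verified with:
- esxcli network vswitch dvs vmware list (the output includes the configured MTU per distributed switch)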
Procedure for vmKernel:
- In the vSphere Web Client, navigate to the host
- Under Manage, select Networking and then select VMkernel adapters
- Select a VMkernel adapter from the adapter table
- Click the name of the VMkernel adapter.
- Click Edit
- Select NIC settings and set the MTU property to a value greater than 1500. Max 9000 bytes
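A hedged ESXi Shell equivalent and an end-to-end test (vmk1 and the target IP are placeholders; 8972 = 9000 bytes minus 20 bytes of IP header and 8 bytes of ICMP header):
- esxcli network ip interface set -m 9000 -i vmk1
- esxcli network ip interface list (verify the MTU column)
- vmkping -d -s 8972 192.168.10.20 (-d sets don't fragment, so the ping succeeds only if jumbo frames pass end to end)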
Procedure on VM:
- Locate the virtual machine in the vSphere Web Client.
- On the Manage tab of the virtual machine, select Settings –> VM Hardware
- Click Edit and click the Virtual Hardware tab
- Expand the network adapter section. Record the network settings and MAC address that the network adapter is using.
- Click Remove to remove the network adapter from the virtual machine.
- From the New device drop-down menu, select Network and click Add
- From the Adapter Type drop-down menu, select VMXNET 2 (Enhanced) or VMXNET 3
- Set the network settings to the ones recorded for the old network adapter
- Set the MAC Address to Manual, and type the MAC address that the old network adapter was using.
- Inside the guest operating system, configure the network adapter to allow jumbo frames. See the documentation of your guest operating system.
- Configure all physical switches and any physical or virtual machines to which this virtual machine connects to support jumbo frames.
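Inside a Linux guest, a minimal example (interface name and target IP are placeholders) to raise the MTU and confirm the path:
- ip link set eth0 mtu 9000
- ping -M do -s 8972 192.168.10.20 (-M do prohibits fragmentation, so a reply confirms jumbo frames work end to end)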
Source: https://docs.vmware.com/en/VMware-vSphere/6.0/vsphere-esxi-vcenter-server-602-networking-guide.pdf
Determine appropriate VLAN configuration for a vSphere implementation
The VLAN configuration in a vSphere environment provides certain benefits:
- Integrates ESXi hosts into a pre-existing VLAN topology.
- Isolates and secures network traffic.
- Reduces congestion of network traffic.
Private VLANs are used to solve VLAN ID limitations by adding a further segmentation of the logical broadcast domain into multiple smaller broadcast subdomains.
Source: https://docs.vmware.com/en/VMware-vSphere/6.0/vsphere-esxi-vcenter-server-602-networking-guide.pdf
Understand how DSCP is handled in a VXLAN frame
Virtualized environments must carry various types of traffic including tenant, storage and management. Each traffic type has different characteristics and applies different demands on the physical switching infrastructure. Different tenants’ traffic carries different quality of service (QoS) values across the fabric.
There are two types of QoS configuration supported in the physical switching infrastructure: one is handled at L2, and the other at the L3 or IP layer. L2 QoS is sometimes referred to as “Class of Service” (CoS) and the L3 QoS as “DSCP marking”
Differentiated Services (DiffServ) is a computer networking architecture that specifies a simple and scalable mechanism for classifying and managing network traffic and providing quality of service (QoS) on modern IP networks. It uses an 8-bit field in the IP header (the DS field), 6 bits of which form the Differentiated Services Code Point (DSCP) used for packet classification.
NSX allows for trusting the DSCP marking originally applied by a virtual machine or explicitly modifying and setting the DSCP value at the logical switch level. In each case, the DSCP value is then propagated to the outer IP header of VXLAN encapsulated frames. This enables the external physical network to prioritize the traffic based on the DSCP setting on the external header.
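For example, if the logical switch is configured to trust the guest marking and a VM tags its traffic with DSCP 46 (Expedited Forwarding), that value is copied into the outer IP header of the VXLAN packet, so the physical switches can prioritize the flow even though they never inspect the encapsulated inner frame.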