Back to OvDC Network Resource Pools
We said that an OvDC Network can be backed by different types of Portgroups in the vCenter infrastructure, depending on the type of OvDC Network Resource Pool used. What does this mean?
While creating an OvDC Network, you need to select whether it is directly connected to the External Network or connected through a vSE. If a vSE is selected, you then need to select the type of network resource pool to be used. The type of network resource pool affects how the vDS Portgroups operate, but from the vCenter infrastructure point of view it will always look like a typical vDS Portgroup.
There are four types of network resource pools:
2. VLAN Backed: In this type, you define a range of VLANs in the network resource pool. During OvDC Network configuration, assuming this type is selected, the next available VLAN is selected and a new Portgroup is created in vSphere using this VLAN. This is a one-to-one mapping, i.e. each OvDC Network maps to exactly one Portgroup and vice versa (see the sketch below).
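As a rough sketch of how a VLAN-backed pool behaves (the class and Portgroup naming below are illustrative assumptions, not vCD's actual implementation), the pool hands out the next free VLAN from its configured range and binds it one-to-one to a new Portgroup:

```python
# Illustrative sketch only: vCD's real allocator is internal to the product.
class VlanBackedPool:
    def __init__(self, first_vlan: int, last_vlan: int):
        self.free_vlans = list(range(first_vlan, last_vlan + 1))
        self.portgroup_by_network = {}  # OvDC Network -> Portgroup (one-to-one)

    def create_ovdc_network(self, network_name: str) -> str:
        if not self.free_vlans:
            raise RuntimeError("VLAN range exhausted")
        vlan = self.free_vlans.pop(0)  # next available VLAN in the range
        portgroup = f"dvs.VCDVS-{network_name}-vlan{vlan}"  # hypothetical name
        self.portgroup_by_network[network_name] = portgroup
        return portgroup

pool = VlanBackedPool(100, 199)
print(pool.create_ovdc_network("Org1-Net"))  # one Portgroup per OvDC Network
```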
3. vCDNI Backed
I would like to briefly explain what vCDNI is. vCDNI is a protocol developed by VMware (not standardized by IEEE/ITU) that is used to encapsulate Ethernet frames in VMware Lab Manager (VLM) frames. VLM frames carry the EtherType 0x88DE. The VLM header adds 24 bytes of overhead, i.e. the overall frame size becomes 1524 bytes.
Note: this is different from the MAC-in-MAC protocol (802.1ah).
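To make the sizing concrete, here is a minimal sketch of the encapsulation overhead. The actual VLM header layout is undocumented, so everything past the outer Ethernet header (which carries EtherType 0x88DE) is modeled as opaque bytes:

```python
import struct

VLM_ETHERTYPE = 0x88DE  # EtherType that marks a VLM frame
VLM_OVERHEAD = 24       # total encapsulation overhead in bytes

def vlm_encapsulate(inner_frame: bytes, outer_dst: bytes, outer_src: bytes) -> bytes:
    """Wrap an inner Ethernet frame in a VLM frame (layout assumed, not documented)."""
    outer_eth = struct.pack("!6s6sH", outer_dst, outer_src, VLM_ETHERTYPE)  # 14 bytes
    opaque = bytes(VLM_OVERHEAD - len(outer_eth))                           # 10 bytes
    return outer_eth + opaque + inner_frame

print(1500 + VLM_OVERHEAD)  # 1524 -> frame size the physical network must carry
```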
But what is the purpose of this encapsulation?
In a multi-tenant cloud environment, you need some sort of isolation between tenants. This was originally implemented using VLANs. However, that limits you to 4094 tenants, the maximum number of VLANs.
vCDNI is basically similar to the private VLANs concept: you have one primary VLAN, but instead of secondary VLANs you use vCDNI IDs. In this case, multiple tenants share the same VLAN but have different vCDNI IDs. Tenants with the same vCDNI ID can talk to each other, while traffic between different vCDNI IDs requires a device to perform inter-vCDNI routing (this device is the vSE).
Using this technology, the theoretical maximum number of tenants becomes roughly 4M (1024 vCDNI IDs per VLAN × 4094 VLANs ≈ 4.2M).
How is it implemented?
The VMware vDS switch has a code component called PGI, which is responsible for generating vCDNI IDs, calculating new MAC addresses for the VLM header based on the vCDNI IDs, and encapsulating Ethernet frames in VLM frames. When a frame reaches the vDS, it checks the EtherType to verify whether it is a normal Ethernet frame or a VLM frame. If it is a VLM frame, the vDS performs the reverse calculation to extract the vCDNI ID from the MAC addresses. If the vCDNI IDs match, the vDS forwards the frame to the destination VM based on the original MAC; otherwise, the frame is dropped.
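A rough sketch of that receive-path decision is shown below; since VMware never published the MAC-to-vCDNI calculation, derive_vcdni_id is a toy stand-in:

```python
VLM_ETHERTYPE = 0x88DE

def derive_vcdni_id(dst_mac: bytes, src_mac: bytes) -> int:
    # Toy stand-in: the real reverse calculation is undocumented by VMware.
    return ((dst_mac[-1] << 8) | src_mac[-1]) % 1024

def handle_frame(ethertype: int, dst_mac: bytes, src_mac: bytes,
                 local_vcdni_id: int) -> str:
    if ethertype != VLM_ETHERTYPE:
        return "forward"              # ordinary Ethernet frame, normal switching
    if derive_vcdni_id(dst_mac, src_mac) == local_vcdni_id:
        return "decapsulate+deliver"  # deliver to the VM using the original inner MAC
    return "drop"                     # vCDNI IDs don't match -> isolation enforced

print(handle_frame(0x0800, b"\x00" * 6, b"\x00" * 6, 7))  # -> forward
```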
When VMs sharing a vCDNI ID reside on different hosts, frames are forwarded across the physical switches to the destination host. Therefore, the physical switches should have their MTU increased to 1524 bytes.
[Figure: Typical vCD Traffic Flow]
Coming back to OvDC Networks: when configuring one with a vCDNI-backed pool selected, a Portgroup is created carrying the shared primary VLAN and using the next available vCDNI ID.
Disadvantages of vCDNI (thanks to Ivan Pepelnjak for highlighting this):
While it sounds like a good feature, here are the challenges I saw during testing:
a. The VLM frame MACs are calculated based on the vCDNI ID, VLAN ID, and the source and destination MAC addresses. The calculation algorithm isn't documented anywhere by VMware. Therefore, looking from the uplink switch, there is no way to track a VM based on its MAC address. This is sometimes required for network troubleshooting when you have a faulty VM.
b. It is not at all secure when it comes to broadcast and multicast traffic.
- When a broadcast message is sent, the destination MAC is FFFF.FFFF.FFFF. When the vDS performs the calculation to generate the VLM MAC addresses, the VLM destination MAC remains FFFF.FFFF.FFFF. This is very dangerous because the destination vDS switches won't be able to extract a vCDNI ID from the VLM MACs and will accordingly flood the frame to all tenants in all vCDNI IDs sharing the same VLAN.
Although having a separate VLAN for each tenant isn't scalable, that kind of broadcast storm is at least contained per VLAN, i.e. per tenant. With vCDNI, however, since tenants share the same VLAN under different vCDNI IDs, up to 1024 tenants can be impacted by a broadcast storm generated by a single infected VM.
Below is an example of a simple ARP broadcast request.
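As a rough sketch of the problem (again with a toy stand-in for the undocumented MAC derivation), the all-ones broadcast destination survives encapsulation unchanged, so no receiving vDS can recover a vCDNI ID from it:

```python
BROADCAST = b"\xff" * 6

def outer_dst_mac(inner_dst: bytes, vcdni_id: int) -> bytes:
    """Toy model of VLM destination-MAC generation (real algorithm undocumented)."""
    if inner_dst == BROADCAST:
        return BROADCAST  # observed behavior: broadcast passes through unchanged
    # Hypothetical encoding, for illustration only:
    return bytes([0x02, 0x00, vcdni_id >> 8, vcdni_id & 0xFF]) + inner_dst[-2:]

arp_dst = BROADCAST  # an ARP request is sent to the broadcast MAC
print(outer_dst_mac(arp_dst, vcdni_id=42).hex())
# ffffffffffff -> no vCDNI ID to extract, so every vDS on the VLAN floods it
```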
- In the case of multicast, vCDNI bypasses the IGMP snooping functionality. IGMP messages are encapsulated in VLM frames, so physical switches can't read them to build the IGMP snooping table. In addition, the multicast destination MAC remains the same in the VLM frame's destination MAC (i.e. the destination vDS won't be able to extract a vCDNI ID). Combining both points, each multicast frame will be flooded to all tenants in all vCDNIs within the same VLAN.
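A simplified sketch of why snooping breaks: a physical switch's IGMP snooping logic only parses IGMP when the EtherType indicates IPv4, and a VLM frame never does:

```python
ETH_IPV4 = 0x0800
VLM_ETHERTYPE = 0x88DE
IGMP_PROTO = 2  # IP protocol number for IGMP

def snooping_sees_igmp(ethertype: int, ip_proto=None) -> bool:
    """Toy model of a physical switch's IGMP snooping parser."""
    return ethertype == ETH_IPV4 and ip_proto == IGMP_PROTO

print(snooping_sees_igmp(ETH_IPV4, IGMP_PROTO))  # True: a normal IGMP join is learned
print(snooping_sees_igmp(VLM_ETHERTYPE))         # False: the IGMP join is hidden
                                                 # inside the VLM frame, so the switch
                                                 # never builds a snooping entry
```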
4. VXLAN Backed
(will be covered in a separate post)
What are Isolated Networks?
We mentioned in the previous post that during OvDC Network and vApp Network creation, you can have the network directly connected to the upper network or connected through a vSE. There is a third type listed, called an Isolated Network. In this case, you create an OvDC/vApp Network that isn't connected to the upper network. This can sometimes be useful when you create Organizations for testing and development.
From the vCenter infrastructure side, a new Portgroup is created as well as a new vSE VA. However, this vSE isn't connected to any external Portgroup.
What is vApp Network Fencing?
This is a combination of Direct and Routed vApp Networks. When this option is enabled, the vApp Network is connected to the OvDC Network through a transparent firewall (Routed). This transparent firewall is the vSE. On the other side, the IP pool defined in the OvDC Network is extended into the vApp Network to be used by the VMs (Direct). Also, the VMs' default gateway is the OvDC Network gateway.
Fencing allows the use of overlapping IP addresses for different VMs in different vApp Networks. Once the vSE detects overlapping IPs, it NATs one of them to a different IP in the same pool to avoid a conflict at the OvDC Network level.
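A minimal sketch of that behavior (class and method names are illustrative, not vSE's actual code): when a fenced vApp brings an internal IP that is already in use on the OvDC Network, the vSE maps it to a free address from the same pool:

```python
import ipaddress

class FencedNat:
    def __init__(self, ovdc_pool):
        self.free = [str(ip) for ip in ovdc_pool]  # unused IPs in the OvDC pool
        self.in_use = set()
        self.nat_table = {}  # (vApp, internal IP) -> external IP

    def attach(self, vapp: str, internal_ip: str) -> str:
        if internal_ip not in self.in_use:
            self.in_use.add(internal_ip)  # no conflict: the IP is used as-is
            if internal_ip in self.free:
                self.free.remove(internal_ip)
            return internal_ip
        external = self.free.pop(0)       # conflict: NAT to a free IP in the pool
        self.in_use.add(external)
        self.nat_table[(vapp, internal_ip)] = external
        return external

pool = ipaddress.ip_network("192.168.10.0/28").hosts()
nat = FencedNat(pool)
print(nat.attach("vApp-A", "192.168.10.5"))  # 192.168.10.5 (first use, no NAT)
print(nat.attach("vApp-B", "192.168.10.5"))  # 192.168.10.1 (overlap -> NATed)
```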
To configure Fencing, the vApp Network should be Direct. If the vApp Network is Routed, Fencing will be greyed out.
From the vCenter infrastructure side, a Portgroup is created for each fenced vApp Network, and a vSE VA is deployed for each fenced vApp Network.
Notes:
- You can override the fencing setting at the VM level while deploying it in the vApp.
- You can't overlap IPs within the same fenced vApp Network.
- When you create a fenced vApp Network, you can't create any other Direct vApp Network.