Tuesday, December 4, 2012

10 things to know about the Nexus 1000V


  1. At the command line, accessible via Telnet or SSH, the Nexus 1000V switch feels just like any other Catalyst or Nexus chassis switch you’ve ever configured. It can likewise be managed and monitored via SNMP, and Cisco provides SNMP MIBs to support this.
  2. To the Nexus 1000V, participating vSphere servers (or hosts) appear as individual modules, much like the line cards you would see in a Catalyst 6500 chassis. You will notice, however, that the module count and the virtual port count associated with each module can scale up much, much higher than in a single physical chassis (see the sample show module output after this list).
  3. The Nexus 1000V was co-developed by Cisco and VMware and can be purchased from either company through resellers. It’s priced per physical CPU, essentially based on the total count of CPUs in each VEM (vSphere host).
  4. The Nexus 1000V Virtual Supervisor Module (VSM) plays much the same role in a Nexus 1000V environment as the Supervisor Engine in a Nexus 7000 or Catalyst 6500 chassis. However, the 1000V supervisor is a virtual machine hosted on an ESX server. And, as with a physical chassis, it can be implemented in a high-availability design, with a standby VSM running on a separate ESX host (see the redundancy sketch after this list).
  5. When a VMware administrator ties a VMware guest (virtual server) into the Nexus 1000V, a virtual Ethernet port is created and associated with that virtual server. That virtual Ethernet port then stays with the virtual server even after the server is vMotioned to another physical host, and it is configurable just like a physical port.
  6. When you hear about policies tied to Nexus 1000V virtual interfaces, these policies usually consist of one or more of the following attributes: VLAN, port channels, private VLAN, ACL, port security, NetFlow, rate limiting, and QoS marking (a sample port profile carrying several of these appears after this list).
  7. Network admins are accustomed to creating port channels between network devices. Now they can create them between Nexus 1000V-enabled servers and physical network devices using exactly the same commands, even on the server side (see the port-channel sketch after this list).
  8. Network admins can SPAN and even ERSPAN traffic to a network analyzer to troubleshoot network issues down to a specific guest’s virtual port. This could be done before on the physical port of the ESX server, but at the cost of having to filter that traffic to single out the desired guest VMs (a sample SPAN session appears after this list).
  9. In the past, server admins worried that giving network admins access within their ESX hosts would create bottlenecks. That wasn’t necessarily the case, since the network configuration tasks (VLANs, QoS, and so on) have always been required anyway. Server admins are now dynamically presented with network configuration information through the single vSphere GUI, using the vSphere/Cisco API.
  10. vSphere vSwitches are local to each host, as is the configuration on each switch. The distributed nature of the Nexus 1000V across all vSphere hosts now allows admins to configure VLANs (along with some of the newer Nexus 1000V features) once and have them available to all hosts within vCenter; the port profile sketch below illustrates this define-once model.
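
To make items 1 and 2 concrete, here is a rough sketch of what show module output can look like on a VSM once two hosts have joined as VEMs. Host names, IP addresses, and port counts are illustrative, and the output is abbreviated:

    n1000v# show module
    Mod  Ports  Module-Type                      Model              Status
    ---  -----  -------------------------------  -----------------  ----------
    1    0      Virtual Supervisor Module        Nexus1000V         active *
    2    0      Virtual Supervisor Module        Nexus1000V         ha-standby
    3    248    Virtual Ethernet Module          NA                 ok
    4    248    Virtual Ethernet Module          NA                 ok

    Mod  Server-IP        Server-Name
    ---  ---------------  ---------------------
    3    10.10.10.13      esx-host-01.lab.local
    4    10.10.10.14      esx-host-02.lab.local

Modules 1 and 2 are the active and standby VSMs; each additional ESX host shows up as its own VEM in the next slot.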
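
The HA pairing from item 4 can be checked from the active VSM. A minimal sketch, with the output trimmed to the relevant fields:

    n1000v# show system redundancy status
    Redundancy role
    ---------------
          administrative:   primary
             operational:   primary

    Redundancy mode
    ---------------
          administrative:   HA
             operational:   HA

    This supervisor (sup-1)
    -----------------------
        Redundancy state:   Active

    Other supervisor (sup-2)
    ------------------------
        Redundancy state:   Standby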
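
Items 5, 6, and 10 all come together in the port profile: the policy container that the VSM pushes to vCenter as a port group and that follows a VM’s virtual Ethernet port through vMotion. A minimal sketch, assuming VLAN 100 and a pre-defined ACL named WEB-ACL (both hypothetical):

    ! WebServers, VLAN 100, and WEB-ACL are hypothetical examples;
    ! WEB-ACL must already exist as an IP access list on the VSM
    vlan 100
    port-profile type vethernet WebServers
      switchport mode access
      switchport access vlan 100
      ip port access-group WEB-ACL in
      vmware port-group
      no shutdown
      state enabled

Because the profile lives on the VSM, the VLAN and ACL are defined once, and every vEthernet port assigned to the WebServers port group, on any host in vCenter, inherits the whole policy.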
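
For item 7, uplinks are defined the same way, as a port profile of type ethernet; the profile name and VLAN list below are made up. The mac-pinning mode shown is one option that does not require a matching port-channel configuration on the upstream physical switch:

    ! Uplink and the VLAN list are illustrative
    port-profile type ethernet Uplink
      switchport mode trunk
      switchport trunk allowed vlan 100,200
      channel-group auto mode on mac-pinning
      vmware port-group
      no shutdown
      state enabled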
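
Finally, for item 8, a local SPAN session on the VSM might look like the following, mirroring one guest’s virtual Ethernet port out a physical uplink toward the analyzer. The interface numbers are invented:

    ! vethernet 5 (the guest) and ethernet 3/2 (the uplink toward
    ! the analyzer) are invented for this example
    monitor session 1
      source interface vethernet 5 both
      destination interface ethernet 3/2
      no shut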
 
