Network Enhancers - "Delivering Beyond Boundaries"

Saturday, April 30, 2011

JUNOS Switching Basics

Juniper CLI

Connecting through the console:
When you log in to a Juniper box (username: root, no password), the BSD shell prompt appears:
root@%
You have several Unix commands available to control the box; for those who know BSD, that's enough:
root@% pwd
/root
root@% cli
root>

The cli command (above) brings you into the JUNOS CLI:

root>

root> ?
Possible completions:

clear Clear information in the system
configure Manipulate software configuration information
file Perform file operations
help Provide help information
monitor Show real-time debugging information
mtrace Trace multicast path from source to receiver
op Invoke an operation script
ping Ping remote target
quit Exit the management session
request Make system-level requests
restart Restart software process
set Set CLI properties, date/time, craft interface message
show Show system information
ssh Start secure shell on another host
start Start shell
telnet Telnet to another host
test Perform diagnostic debugging
traceroute Trace route to remote host

This is the simple, clean configuration of a Juniper EX4200:

root> show configuration
## Last commit: 2008-07-27 14:13:35 UTC by root
version 9.0R2.10;
system {
root-authentication {
encrypted-password "$1$UyFEalx8$EvbHYVOcKL/vaVwdQfvMN/"; ## SECRET-DATA
}
syslog {
user * {
any emergency;
}
file messages {
any notice;
authorization info;
}
file interactive-commands {
interactive-commands any;
}
}
}
interfaces {
ge-0/0/0 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/1 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/2 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/3 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/4 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/5 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/6 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/7 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/8 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/9 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/10 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/11 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/12 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/13 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/14 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/15 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/16 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/17 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/18 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/19 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/20 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/21 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/22 {
unit 0 {
family ethernet-switching;
}
}
ge-0/0/23 {
unit 0 {
family ethernet-switching;
}
}
ge-0/1/0 {
unit 0 {
family ethernet-switching;
}
}
xe-0/1/0 {
unit 0 {
family ethernet-switching;
}
}
ge-0/1/1 {
unit 0 {
family ethernet-switching;
}
}
xe-0/1/1 {
unit 0 {
family ethernet-switching;
}
}
ge-0/1/2 {
unit 0 {
family ethernet-switching;
}
}
ge-0/1/3 {
unit 0 {
family ethernet-switching;
}
}
vme {
unit 0 {
family inet {
address 192.168.1.253/24;
}
}
}
}
protocols {
lldp {
interface all;
}
rstp;
}
virtual-chassis {
member 0 {
mastership-priority 255;
}
}
poe {
interface all;
}

root>
Yes, this is a new EX4200 switch.

root> show version
Model: ex4200-24t
JUNOS Base OS boot [9.0R2.10]
JUNOS Base OS Software Suite [9.0R2.10]
JUNOS Kernel Software Suite [9.0R2.10]
JUNOS Crypto Software Suite [9.0R2.10]
JUNOS Online Documentation [9.0R2.10]
JUNOS Enterprise Software Suite [9.0R2.10]
JUNOS Packet Forwarding Engine Enterprise Software Suite [9.0R2.10]
JUNOS Routing Software Suite [9.0R2.10]
JUNOS Web Management [9.0R2.10]

Let's look at the default VLANs:

root> show vlans
Name Tag Interfaces
default
ge-0/0/0.0, ge-0/0/1.0*, ge-0/0/2.0*, ge-0/0/3.0, ge-0/0/22.0, ge-0/0/23.0,
xe-0/1/0.0, xe-0/1/1.0
mgmt
me0.0*

root>

It runs RSTP by default (Cisco runs PVST+ by default):

root> show spanning-tree bridge brief
STP bridge parameters
Context ID : 0
Enabled protocol : RSTP
Root ID : 32768.00:1f:12:31:b8:40
Hello time : 2 seconds
Maximum age : 20 seconds
Forward delay : 15 seconds
Message age : 0
Number of topology changes : 0
Local parameters
Bridge ID : 32768.00:1f:12:31:b8:40
Extended system ID : 0
Internal instance ID : 0
root> show spanning-tree interface
Spanning tree interface parameters for instance 0

Interface Port ID Designated port ID Designated bridge ID Port Cost State Role

ge-0/0/0.0 128:513 128:513 32768.001f1231b840 20000 BLK DIS
ge-0/0/1.0 128:514 128:514 32768.001f1231b780 20000 FWD ROOT
ge-0/0/2.0 128:515 128:515 32768.001f1231b780 20000 BLK ALT

How do you configure it?


root>

root> edit
Entering configuration mode
Users currently editing the configuration:
root terminal p0 (pid 698) on since 2008-07-27 14:08:30 UTC, idle 00:07:02
[edit]

[edit]
root#
root#
After making the required changes:

root# commit
commit complete

[edit]
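For example, to create a VLAN and put a port into it from configuration mode, you might enter the following (the VLAN name "engineering" and ID 100 here are hypothetical, just to illustrate the workflow):

```
root# set vlans engineering vlan-id 100
root# set interfaces ge-0/0/0 unit 0 family ethernet-switching vlan members engineering
root# commit
commit complete
```

Nothing takes effect until the commit succeeds; until then you are only editing the candidate configuration.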

Friday, April 29, 2011

Understanding NPIV and NPV

By Scott

Two technologies that seem to have come to the fore recently are NPIV (N_Port ID Virtualization) and NPV (N_Port Virtualization). Judging just by the names, you might think that these two technologies are the same thing. While they are related in some aspects and can be used in a complementary way, they are quite different. First, though, I need to cover some basics. This is unnecessary for those of you who are Fibre Channel experts, but for the rest of the world it might be useful:
  • N_Port: An N_Port is an end node port on the Fibre Channel fabric. This could be an HBA (Host Bus Adapter) in a server or a target port on a storage array.
  • F_Port: An F_Port is a port on a Fibre Channel switch that is connected to an N_Port. So, the port into which a server’s HBA or a storage array’s target port is connected is an F_Port.
  • E_Port: An E_Port is a port on a Fibre Channel switch that is connected to another Fibre Channel switch. The connection between two E_Ports forms an Inter-Switch Link (ISL).
There are other types of ports as well—NL_Port, FL_Port, G_Port, TE_Port—but for the purposes of this discussion these three will get us started. With these definitions in mind, I’ll start by discussing N_Port ID Virtualization (NPIV).

N_Port ID Virtualization (NPIV)

Normally, an N_Port would have a single N_Port_ID associated with it; this N_Port_ID is a 24-bit address assigned by the Fibre Channel switch during the FLOGI process. The N_Port_ID is not the same as the World Wide Port Name (WWPN), although there is typically a one-to-one relationship between WWPN and N_Port_ID. Thus, for any given physical N_Port, there would be exactly one WWPN and one N_Port_ID associated with it.

What NPIV does is allow a single physical N_Port to have multiple WWPNs, and therefore multiple N_Port_IDs, associated with it. After the normal FLOGI process, an NPIV-enabled physical N_Port can subsequently issue additional commands to register more WWPNs and receive more N_Port_IDs (one for each WWPN). The Fibre Channel switch must also support NPIV, as the F_Port on the other end of the link would “see” multiple WWPNs and multiple N_Port_IDs coming from the host and must know how to handle this behavior.

Once all the applicable WWPNs have been registered, each of these WWPNs can be used for SAN zoning or LUN presentation. There is no distinction between the physical WWPN and the virtual WWPNs; they all behave in exactly the same fashion and you can use them in exactly the same ways.
So why might this functionality be useful? Consider a virtualized environment, where you would like to be able to present a LUN via Fibre Channel to a specific virtual machine only:
  • Without NPIV, it’s not possible because the N_Port on the physical host would have only a single WWPN (and N_Port_ID). Any LUNs would have to be zoned and presented to this single WWPN. Because all VMs would be sharing the same WWPN on the one single physical N_Port, any LUNs zoned to this WWPN would be visible to all VMs on that host because all VMs are using the same physical N_Port, same WWPN, and same N_Port_ID.
  • With NPIV, the physical N_Port can register additional WWPNs (and N_Port_IDs). Each VM can have its own WWPN. When you build SAN zones and present LUNs using the VM-specific WWPN, then the LUNs will only be visible to that VM and not to any other VMs.
Virtualization is not the only use case for NPIV, although it is certainly one of the easiest to understand.
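The registration flow described above can be sketched as a toy model in Python (the domain ID, WWPNs, and address layout below are illustrative assumptions, not values from a real fabric):

```python
# Toy model of NPIV login: the switch hands out one 24-bit N_Port_ID per
# registered WWPN -- the first via FLOGI, additional ones via FDISC.

class FabricSwitch:
    def __init__(self, domain_id):
        self.domain_id = domain_id   # each FC switch consumes one domain ID
        self.next_port = 0
        self.logins = {}             # WWPN -> assigned N_Port_ID

    def _assign(self, wwpn):
        # Hypothetical layout: domain (8 bits) | area/port (16 bits)
        nport_id = (self.domain_id << 16) | self.next_port
        self.next_port += 1
        self.logins[wwpn] = nport_id
        return nport_id

    def flogi(self, wwpn):
        """Normal fabric login for the physical N_Port's WWPN."""
        return self._assign(wwpn)

    def fdisc(self, wwpn):
        """NPIV: register an additional (virtual) WWPN on the same N_Port."""
        return self._assign(wwpn)

switch = FabricSwitch(domain_id=0x21)
physical = switch.flogi("20:00:00:25:b5:00:00:01")   # physical HBA WWPN
vm1 = switch.fdisc("20:00:00:25:b5:00:0a:01")        # per-VM virtual WWPN
vm2 = switch.fdisc("20:00:00:25:b5:00:0a:02")
print(f"{physical:06x} {vm1:06x} {vm2:06x}")         # prints: 210000 210001 210002
```

Each of the three WWPNs now has its own N_Port_ID and can be zoned independently, which is exactly what makes per-VM LUN presentation possible.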
As an aside, it’s interesting to me that VMotion works and is supported with NPIV as long as the RDMs and all associated VMDKs are in the same datastore. Looking at how the physical N_Port has the additional WWPNs and N_Port_IDs associated with it, you’d think that VMotion wouldn’t work. I wonder: does the HBA on the destination ESX/ESXi host have to “re-register” the WWPNs and N_Port_IDs on that physical N_Port as part of the VMotion process?

Now that I’ve discussed NPIV, I’d like to turn the discussion to N_Port Virtualization (NPV).

N_Port Virtualization

While NPIV is primarily a host-based solution, NPV is primarily a switch-based technology. It is designed to reduce switch management and overhead in larger SAN deployments. Consider that every Fibre Channel switch in a fabric needs a different domain ID, and that the total number of domain IDs in a fabric is limited. In some cases, this limit can be fairly low depending upon the devices attached to the fabric. The problem, though, is that you often need to add Fibre Channel switches in order to scale the size of your fabric. There is therefore an inherent conflict between trying to reduce the overall number of switches in order to keep the domain ID count low while also needing to add switches in order to have a sufficiently high port count. NPV is intended to help address this problem.
NPV introduces a new type of Fibre Channel port, the NP_Port. The NP_Port connects to an F_Port and acts as a proxy for other N_Ports on the NPV-enabled switch. Essentially, the NP_Port “looks” like an NPIV-enabled host to the F_Port on the other end. An NPV-enabled switch will register additional WWPNs (and receive additional N_Port_IDs) via NPIV on behalf of the N_Ports connected to it. The physical N_Ports have no knowledge that this is occurring and don’t need any support for it; it’s all handled by the NPV-enabled switch.

Obviously, this means that the upstream Fibre Channel switch must support NPIV, since the NP_Port “looks” and “acts” like an NPIV-enabled host to the upstream F_Port. Additionally, because the NPV-enabled switch now looks like an end host, it no longer needs a domain ID to participate in the Fibre Channel fabric. Using NPV, you can add switches and ports to your fabric without adding domain IDs.

So why is this functionality useful? There is the immediate benefit of being able to scale your Fibre Channel fabric without having to add domain IDs, yes, but in what sorts of environments might this be particularly useful? Consider a blade server environment, like an HP c7000 chassis, where there are Fibre Channel switches in the back of the chassis. By using NPV on these switches, you can add them to your fabric without having to assign a domain ID to each and every one of them.

Here’s another example. Consider an environment where you are mixing different types of Fibre Channel switches and are concerned about interoperability. As long as there is NPIV support, you can enable NPV on one set of switches. The NPV-enabled switches will then act like NPIV-enabled hosts, and you won’t have to worry about connecting E_Ports and creating ISLs between different brands of Fibre Channel switches.

Thursday, April 28, 2011

Kansas City first to get Google’s 1Gbps Internet


In February last year Google got cities around the U.S. excited as it introduced an offer to hook some of them up with a fiber-to-the-home 1Gbps network. While most locations struggle to offer anything past 20Mbps, Google is making the leap to 1Gbps and taking some communities with it.

A little over a year later, Google has decided on the first city it will work with to implement the super-fast network: Kansas City. When first announced, Google said it needed city representatives and the individuals who thought it should come to their city to get in touch and state their case. It seems Kansas City made the best of that and really impressed Google with its commitment to the project if it were selected.



A development agreement has already been signed and the network will be up and running at some point in 2012. The Kauffman Foundation, KCNext, and the University of Kansas Medical Center are all involved in its development and deployment.

Google has restated its commitment to select other cities to work with and roll out 1Gbps networks to, but you have to expect that once Kansas City is up and running Google will have a much better idea of how to plan and organize city-wide projects like this. The team handling the projects will only get faster and more efficient at implementing them, needing just the full backing of a city to make it happen.

This could definitely set the bar for the future of U.S. Internet access if all goes well, and I don’t think anyone will complain about Google coming to their city if the end result is phenomenal connection speeds everywhere.

Wednesday, April 27, 2011

JNCIP-SP - Practice Test Available

The Juniper Networks Certification Program is pleased to announce that the JN0-660 Service Provider Routing and Switching, Professional (JNCIP-SP) Exam is live at Prometric Testing Centers worldwide beginning April 22, 2011.

For the exam description, study materials, and the new JN0-660 Service Provider Routing and Switching, Professional (JNCIP-SP) practice test, please go to the Juniper Learning Portal:

https://learningportal.juniper.net/juniper/user_activity_info.aspx?id=4712

JNCIP-SP Exam Topics

  • OSPF

  • IS-IS

  • BGP

  • Class of Service (CoS)

  • IP Multicast

  • MPLS

  • Layer 3 VPNs

  • Layer 2 VPNs

  • Automation

    Cisco’s Data Center Launch

    By Scott

    First, let’s take a quick look at the “new product” news included in the launch:
    • Cisco is launching the Nexus 3000, a new member of the Nexus family. It’s a high density, ultra-low latency 1RU switch that supports both 1GbE as well as 10GbE. All ports are wire rate and non-blocking. At first glance, this seemed to be a bit of a conflict with some of the Catalyst switches (like the 4948, perhaps?), but the distinction here is the low latency target (approximately 1 usec port to port). It also makes sense if customers are interested in standardizing on switches running NX-OS instead of a mix of NX-OS and IOS. 

    • Cisco is also launching new Nexus 5500 series switches, the Nexus 5548UP and Nexus 5596UP. These are very much like the other members of the Nexus 5500 family, except that the “UP” designation indicates that they support “Unified Ports”: ports that can run Fibre Channel (up to 8Gbps FC), Fibre Channel over Ethernet (at 10Gbps), or Ethernet at 1Gbps. You will need to swap SFPs (for now, at least) and reconfigure the port, and you must reset the module in which the ports reside in order to switch personality. The Unified Port functionality is extremely useful, and I’m glad to see Cisco deliver it.

    • Cisco also announced a new C-Series rack-mount server, the C260 M2. This is a dual-socket server running Intel Westmere CPUs, supporting up to 64 DIMM slots (using Cisco’s Memory Expansion technology) and up to 16 HDDs or SSDs. It’s a pretty beefy server. From a virtualization perspective, the expanded memory footprint is pretty useful, but I’m not so sure that the 16 drive bays are much of a selling point.

    • Also in the new product lineup are some new Catalyst 6500 modules: an ACE-30 module, an ASA service module, and an ES-40 40Gb DCI module for high-speed data center interconnects.

    • Finally, Cisco is launching an FCoE module for the MDS, which will allow customers to integrate FCoE into their networks while preserving their investment in the MDS infrastructure.
    However, the launch isn’t just about new products; if it were, it would be pretty boring, to be honest. The news included in the launch about the expansion of existing technologies is, in my mind, far more significant:
    • First, Cisco is stepping away from the “VN-Tag” moniker and focusing instead on the IEEE standardization of that functionality, a la 802.1Qbh. That’s a good move, in my opinion; they’re also going to be adding 802.1Qbg support in the future. Not such a good move (again, in my opinion) is the rebranding effort around VN-Tag and VN-Link, which are now going to be called Adapter FEX and VM-FEX. I don’t know that this new naming convention is going to be any less confusing or any clearer than the previous strategy. It is a bit more technically accurate, at least, since—if you read my post on network interface virtualization—you know that the Cisco Virtual Interface Controller (VIC), aka “Palo”, is really just a fabric extender (FEX) in a mezzanine card form factor. So, calling this Adapter FEX is a bit more accurate. VM-FEX…well, this still isn’t so clear. If any Cisco folks want to help clear things up, feel free to jump in by posting a comment to this post.

    • Cisco is adding FCoE support to the Nexus 7000 via the F1 line cards. This brings FCoE support directly to the Nexus 7000, which is a move that many expected. Along with the FCoE support, Cisco is also introducing the idea of a new type of virtual device context (VDC), the storage VDC. A storage VDC will process all FCoE traffic passing through the switch, in a completely separate fashion from LAN traffic (which would run in a separate LAN VDC). At least initially, Cisco will require the use of a storage VDC to use FCoE with the Nexus 7000; that might change over time. As with the introduction of the FCoE module for the MDS 9500, this news is interesting, but is really only useful in conjunction with other announcements. Specifically…(drum roll please)

    • In perhaps the biggest announcement of the event, Cisco is now supporting full multi-hop FCoE topologies. As I understand it, there will be a new release of NX-OS that will support the creation of VE_ports and enable multi-hop FCoE topologies (up to 7 hops supported). This functionality will exist not only on the Nexus 5000/5500, but also on the Nexus 7000 and on the FCoE module of the MDS 9500. As far as I know, this also means that any Nexus 2000 series fabric extenders that support FCoE will also now support multi-hop FCoE via their upstream Nexus switch. Cisco is going to require dedicated FCoE links between switches, at least at first, and as I mentioned earlier will require the use of a storage VDC on the Nexus 7000. I believe that this is probably the most significant announcement being made, and when taken together with other FCoE-related announcements like the FCoE module on the MDS 9500 and FCoE support on the Nexus 7000, opens up lots of new possibilities for FCoE and Unified Fabric in the data center. I, for one, am really excited to dive even deeper into the design considerations around multi-hop FCoE and Unified Fabric. Any network gearheads interested in doing a brain dump on multi-hop FCoE for me?

    • Equally important in the long term but (apparently) not making a big impact immediately is LISP (Locator/ID Separation Protocol). I’ve been talking LISP with Cisco for a while now (as I suspect many of you have), so it’s good to see the official announcement. Lots of people confuse the purpose/role of LISP when compared to OTV; they are both equally important but in very different ways. Further, LISP does not replace OTV (or vice versa). I’ll probably try my hand at a separate blog post to specifically discuss these two technologies and how they are planned to work hand-in-hand for workload mobility. For now, it should suffice to say that OTV addresses Layer 2 connectivity between data centers while LISP helps the rest of the network more efficiently understand and adapt to the Layer 2 connectivity between data centers. Both are necessary.
    There were a few other tidbits—like the ability to run up to 6 Nexus 1000V VSMs on the Nexus 1010, or support for the Virtual Security Gateway (VSG) on the Nexus 1010—but this covers the bulk of the information.

    All in all, it’s exciting to see some of these technologies coming to light, and I’m really excited to see how the data center is going to evolve over the next couple of years. It’s a great time to be in this industry, especially if you’re a glutton for learning new technologies like me!

    Tuesday, April 26, 2011

    Setting Up FCoE on a Nexus 5000

    By Scott
    Fibre Channel over Ethernet (FCoE) is receiving a great deal of attention in the media these days. Fortunately, setting up FCoE on a Nexus 5000 series switch from Cisco isn’t too terribly complicated, so don’t be too concerned about deploying FCoE in your datacenter (assuming it makes sense for your organization). Configuring FCoE basically consists of three major steps:

    1. Enable FCoE on the switch.
    2. Map a VSAN for FCoE traffic onto a VLAN.
    3. Create virtual Fibre Channel interfaces to carry the FCoE traffic.

    The first step is incredibly easy. To enable FCoE on the switch, just use this command:

    switch(config)# feature fcoe

    The next part of the FCoE configuration is mapping a VSAN to a VLAN. What VSAN should you use? Well, if you are connecting to an existing Fibre Channel fabric, perhaps on a Cisco MDS switch, you’ll need to make sure that the VSANs between the Nexus and the MDS are appropriately matched. Otherwise, traffic on one VSAN on the Nexus won’t be able to reach devices on another VSAN on the MDS. If there’s enough demand, I’ll post a quick piece on this step as well.

    Note that this FCoE VSAN-to-VLAN mapping is a required step; if you don’t do this, the FCoE side of the interfaces won’t come up (as you’ll see later in this post). Assuming the VSAN is already defined, perform these steps to map the VSAN to a VLAN:

    switch(config)# vlan XXX
    switch(config-vlan)# fcoe vsan YYY
    switch(config-vlan)# exit

    Obviously, you’ll want to substitute XXX and YYY for the correct VLAN and VSAN numbers, respectively.

    After you’ve enabled FCoE and mapped FCoE VSANs onto VLANs, then you are ready to create virtual Fibre Channel (vfc) interfaces. Each physical Nexus port that will carry FCoE traffic must have a corresponding vfc interface. Generally, you will want to create the vfc interface with the same number as the physical interface, although as far as I know you are not required to do so. It just makes management of the interfaces easier. The commands to create a vfc interface look like this:

    switch(config)# interface vfc ZZ
    switch(config-if)# bind interface ethernet 1/ZZ
    switch(config-if)# no shutdown
    switch(config-if)# exit

    At this point the vfc interface is created, but it won’t work yet; you’ll need to place it into a VSAN that is mapped to an FCoE-enabled VLAN. If you don’t, the show interface vfc <number> command will report this (emphasis mine):

    vfc13 is down (VSAN not mapped to an FCoE enabled VLAN)

    As I mentioned earlier, if you haven’t mapped the FCoE VSAN onto a VLAN, you won’t be able to fix this problem. If you have mapped the FCoE VSAN onto a VLAN, then you only need to assign the vfc interface to the appropriate VSAN with these commands:

    switch(config)# vsan database
    switch(config-vsan-db)# vsan YYY interface vfc ZZ
    switch(config-vsan-db)# exit

    At this point, the vfc interface will report up, and you should be able to see the host’s connection information with the show flogi database command.

    From this point—assuming that your storage is attached to a traditional Fibre Channel fabric, which is likely to be the case in the near future—you only need to create zones with the WWNs of the FCoE-attached hosts in order to grant them access to the storage.
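Because the same three vfc commands repeat for every FCoE-attached port, the procedure is easy to template. Here is a quick Python sketch that emits the configuration described above (the VLAN, VSAN, and port numbers are placeholders, as with XXX/YYY/ZZ earlier):

```python
# Emit the Nexus 5000 FCoE commands described in this post for a set of
# ports: enable FCoE, map the VSAN onto a VLAN, create/bind vfc interfaces,
# and place each vfc into the VSAN. All numbers are examples only.

def fcoe_config(vlan, vsan, ports):
    lines = ["feature fcoe",
             f"vlan {vlan}",
             f"  fcoe vsan {vsan}",
             "  exit"]
    for p in ports:
        lines += [f"interface vfc {p}",
                  f"  bind interface ethernet 1/{p}",
                  "  no shutdown",
                  "  exit"]
    lines.append("vsan database")
    lines += [f"  vsan {vsan} interface vfc {p}" for p in ports]
    return "\n".join(lines)

print(fcoe_config(vlan=100, vsan=10, ports=[13, 14]))
```

Generating the commands this way also makes it harder to forget the VSAN-to-VLAN mapping step that the vfc interfaces depend on.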


    In my own experience, once FCoE was properly configured on the Nexus 5000 switch, creating zones and zonesets on the Cisco MDS Fibre Channel switch and creating and masking LUNs on the Fibre Channel-attached storage was very straightforward. This, as has been stated on several previous occasions, is one of the strengths of FCoE: its compatibility with existing Fibre Channel installations is outstanding.

    Monday, April 25, 2011

    IOS Packet Capture and Auto Upgrade

    Finally, IOS has a feature that was missing in the past: the ability to easily capture packets travelling through the router and export the captured data in PCAP format so that you can view it with third-party tools (like Wireshark). The packets can also be viewed locally on the router. The configuration example below shows how to enable packet capture (supported since IOS version 12.4(20)T):

    Cisco-Router# monitor capture buffer mycapturedata size 128 max-size 128 circular
    Cisco-Router# monitor capture point ip cef capturepoint1 fastEthernet 1/1 both
    Cisco-Router# monitor capture point associate capturepoint1 mycapturedata

    !Start the capture

    Cisco-Router# monitor capture point start capturepoint1

    !Stop the capture

    Cisco-Router# monitor capture point stop capturepoint1

    The configuration above first creates a capture circular buffer (mycapturedata) and a capture interface point (capturepoint1) on physical interface FastEthernet 1/1. Then you need to associate the capture point and the capture buffer.

    Now, in order to view or export the captured data use the following commands:

    Cisco-Router# show monitor capture buffer mycapturedata dump
    Cisco-Router# monitor capture buffer mycapturedata export [location]
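An exported capture begins with the standard 24-byte PCAP global header that Wireshark and other tools expect. Here is a small Python sketch for sanity-checking a file (a synthetic header is used here in place of real captured data):

```python
# Parse the libpcap global header: magic, version, thiszone, sigfigs,
# snaplen, and link type (1 = Ethernet).
import struct

def read_pcap_header(data):
    magic, vmaj, vmin, tz, sigfigs, snaplen, linktype = struct.unpack(
        "<IHHiIII", data[:24])
    if magic != 0xA1B2C3D4:          # little-endian, microsecond timestamps
        raise ValueError("not a little-endian pcap file")
    return {"version": (vmaj, vmin), "snaplen": snaplen, "linktype": linktype}

# Synthetic header: pcap 2.4, snaplen 65535, linktype 1 (Ethernet)
hdr = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
info = read_pcap_header(hdr)
print(info)   # prints: {'version': (2, 4), 'snaplen': 65535, 'linktype': 1}
```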

    IOS Auto Upgrade

    From IOS version 12.4(15)T, there is a new feature for automatically upgrading your Cisco IOS images, either directly from Cisco (IDA Server – Intelligent Download Application) or from a local TFTP/FTP server.


    The new auto upgrade feature also provides a “warm upgrade” option, which decompresses the new image and transfers control to it using the reload warm command. To set up auto upgrade, use the following commands:

    Router# configure terminal
    Router(config)# autoupgrade disk-cleanup crashinfo
    Router(config)# autoupgrade ida url [enter the URL of the IDA Server]
    Router(config)# autoupgrade status [email address] [smtp-server]

    ! Now issue the interactive mode command to step you through the upgrade process

    Router# upgrade automatic

    Sunday, April 24, 2011

    Booting a Cisco router from a USB Flash Drive

    Cisco routers typically store a copy of the device’s operating system (Cisco IOS) in their flash memory, and load this operating system image into RAM during the boot-up process. The flash memory of Cisco routers is usually internal or can be a removable flash card on higher end routers.

    However, it may happen that for various reasons the operating system image may not be available, maybe due to file corruption, flash memory corruption, accidental deletion, etc. In this case the device does not have a valid image to load and therefore the router boots into ROM monitor mode (rommon). This mode gives a reduced set of commands that essentially allow the administrator to manually run the boot sequence.

    For such cases, Cisco ISR routers have one or two USB ports that, using ROM monitor commands, can be used to load the IOS image from a USB flash drive.

    How to Boot from USB

    The obvious prerequisite of this procedure is to have a valid IOS image, suitable for the device you want to put into operation, stored on a USB flash drive. Once we have this resource, we must enter ROM monitor mode (rommon). If the device does not have a valid IOS image in the internal flash memory, it will go directly into that mode. If it does, we can force entry into rommon mode by interrupting the boot sequence with “Ctrl + Break”.

    From this point, we can see the rommon mode prompt:

    rommon 1>

    In this mode we can see the list of available commands using the question mark or help command:

    rommon 1>?
    or
    rommon 1> help

    Then we can check our image stored on USB flash drive:

    rommon 2> dir usbflash0:

    program load complete, entry point: 0x8000f000, size: 0x3d240
    Directory of usbflash0:
    2  -rw-  14871760  c2800nm-ipbase-mz.124-3.bin

    Note: The command is dir usbflashx: where x assumes a value of 0 or 1 depending on which USB port of the router you are using.

    Then run the command that orders the router to boot from the image stored on USB flash:

    rommon 3> boot usbflash0:c2800nm-ipbase-mz.124-3.bin
    program load complete, entry point: 0x8000f000, size: 0x3d240
    program load complete, entry point: 0x8000f000, size: 0xe2eb30
    Self decompressing the image:
    ########################################################################
    ############################################################### [OK]

    Once the router has booted up, you can work with the normal IOS command-line interface. You can copy the image on the USB flash drive into the router’s internal flash memory:

    Router> enable
    Router# copy usbflash0:c2800nm-ipbase-mz.124-3.bin flash:c2800nm-ipbase-mz.124-3.bin


    From now on, the router will boot from the internal flash memory.

    Saturday, April 23, 2011

    Networking 101: Understanding the Internet Protocol

    Welcome back! This edition of Networking 101 will give you the IP knowledge required to understand routing issues. Almost everything on the Internet uses IP, and unlike Ethernet, knowing this protocol is pivotal to understanding how networking works with regard to the big picture. In upcoming articles, Networking 101 will explore TCP and UDP, routing theories, and then delve into the specific routing protocols. It's going to be a wild ride.

    Internet Protocol (IP) sits directly on top of layer 2, and is responsible for getting datagrams to their destination. Originally defined in RFC 791, IP has changed and been clarified a few times since, but the fundamental design remains the same. The IP layer does not provide any type of flow-control or sequencing capabilities—that's left to the upper layers. We'll be using "datagram" to refer to an entire IP message, and "packet" to identify an individual IP packet.

    IP sends and receives packets to and from IP addresses, but doesn't promise reliable delivery. There is no concept of "retries" in the IP layer. For various reasons, packets may be lost, corrupted, duplicated, delivered out of order or otherwise delayed. IP is also responsible for dealing with IP options and giving feedback in the form of ICMP error and control messages, explained last week in our look at ICMP.

    The IP header, 20 bytes long, comes immediately after the layer 2 header (because IP is layer 3). The IP data portion holds everything else, including a TCP or UDP packet in its entirety, as shown in the table below. Also note that the IP header can exceed 20 bytes, if IP options are used.
    | Ethernet Header | IPv4 Header | Data (TCP, etc.) |

    IP is pretty straightforward, in that IP's goal is simple: get a datagram to the destination, and don't worry about anything but sending it to the next hop router. In reality, IP is more complex; otherwise the header wouldn't have so many fields. Sorry for this, but scrutinizing the IP header is important. The fields, starting at the top (first bit) are:
    • Version: The version of IP being used. As expected, an IPv4 packet will have this field set to "4."
    • Header Length: Specifies the length of the header in 4-byte multiples. Options are rarely used, so you can expect to see a value of 5 here, meaning the header is 20 bytes long.
    • Type of Service: Rarely used, but theoretically this field is designed to give routers hints about how they should queue the IP datagram. Mainly for quality of service purposes, hosts may choose to set various options, such as: low delay, high throughput, or high reliability. These options are ignored by most routers.
    • Total Length: Specifies the length of the entire IP packet, including header, in bytes. Because this is a 16-bit number, IP packet length is limited to 65,535 bytes. This number represents the IP packet in question, not the entire IP datagram.
    • IP Datagram ID: Sometimes called the Fragment Identifier, this is an identifier used to check what IP datagram a specific IP packet belongs to. This field is necessary if IP hopes to combine individual IP packets back into a single datagram.
    • Flags: The Don't Fragment (DF) bit lives here, and is used to instruct routers to not fragment an IP packet. More Fragments (MF) is also available.
    • Fragment Offset: The offset of this fragment in the original datagram, in 64-bit blocks.
    • TTL (Time to Live): The number of hops remaining in an IP packet's life before it is destroyed. The TTL is what keeps undeliverable IP packets from floating around on the Internet forever.
    • Protocol Type: Specifies the next protocol, i.e. the header that will be encountered in the IP packet's payload.
    • Header Checksum: A checksum of the header, but not the data.
    • IP Source: The IP address of the host that originally sent the IP packet.
    • IP Destination: The IP address of the host the IP packet is destined for.
    • IP Options: Things such as loose source routing can be added to the end of an IP header.
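    To make the field layout above concrete, here is a short Python sketch (not from the article; the function and field names are ours) that unpacks a 20-byte IPv4 header with the struct module:

    ```python
    import struct

    def parse_ipv4_header(raw: bytes) -> dict:
        """Unpack the fixed 20-byte IPv4 header described above."""
        (ver_ihl, tos, total_len, dgram_id, flags_frag,
         ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
        return {
            "version":        ver_ihl >> 4,             # 4 for IPv4
            "header_len":     (ver_ihl & 0x0F) * 4,     # IHL counts 4-byte words
            "tos":            tos,
            "total_length":   total_len,                # whole packet, in bytes
            "datagram_id":    dgram_id,                 # fragment identifier
            "dont_fragment":  bool(flags_frag & 0x4000),
            "more_fragments": bool(flags_frag & 0x2000),
            "frag_offset":    (flags_frag & 0x1FFF) * 8,  # stored in 8-byte units
            "ttl":            ttl,
            "protocol":       proto,                    # e.g. 6 = TCP, 17 = UDP
            "checksum":       checksum,
            "src":            ".".join(map(str, src)),
            "dst":            ".".join(map(str, dst)),
        }
    ```

    Note that the header length and fragment offset are stored in multiples (4-byte and 8-byte units respectively), which is how a small field can describe a large value.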
    When a router receives an IP packet, it will first check the packet's destination. If the router has a route to the destination, it will decrement the TTL, recalculate the checksum, and then ship the packet on its way. If something goes awry, the corresponding ICMP error will be sent, and the packet will be discarded. In its simplest form, that's how IP operates: it just repeats those steps for every packet it encounters.
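    Those per-packet steps can be sketched in Python (a toy model, not router code; real routers do this in hardware, and the helper names are ours). The checksum is the standard RFC 1071 ones'-complement sum, which must be recomputed because decrementing the TTL changes the header:

    ```python
    import struct

    def internet_checksum(data: bytes) -> int:
        """RFC 1071 ones'-complement sum over 16-bit words."""
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        while total >> 16:                       # fold carries back in
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    def forward(header: bytearray) -> bool:
        """Return True if a 20-byte IPv4 header can be forwarded; False means
        drop the packet (and send an ICMP Time Exceeded error)."""
        ttl = header[8]
        if ttl <= 1:
            return False                         # TTL expired: discard
        header[8] = ttl - 1                      # decrement TTL
        header[10:12] = b"\x00\x00"              # zero checksum before recomputing
        header[10:12] = struct.pack("!H", internet_checksum(bytes(header)))
        return True
    ```

    A handy property of this checksum: summing a header that already contains a correct checksum yields zero, which is how receivers validate it.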

    IP fragmentation is really the key to IP functionality, and exploring fragmentation is also educational: it gives real meaning to those header fields. Not every network passing an IP packet around is capable of sending the same sizes of packets. Various layer 2 frame formats allow different amounts of data to be sent at once. The largest MTU allowed is 65,535 bytes, and the smallest is 68 bytes. RFC 1122 says that all hosts must be able to reassemble datagrams that are up to 576 bytes, but should in fact be able to reassemble datagrams that are the size of the interface's MTU.

    When shipping an IP datagram over the Internet, you have no idea what the MTU will happen to be along every layer 2 link. Your ISP might be connected via Ethernet to a tier 1 ISP, but the remote site you're trying to access could be on an ISDN link. Therefore, your IP packets will have to be fragmented before the last hop. Fragmentation can happen many times, too. If we wanted to send a 2000 byte packet to a remote site connected via ISDN, we would originally fragment the packet to fit on our 1500 byte link. We will still send an IP packet larger than 576 bytes (ISDN's MTU), so the last router before the ISDN link will have to fragment it as well.

    Also recall that IP is not a reliable protocol, so if any IP fragment gets lost along the way, the entire datagram must be resent. IP has no way to request the missing portion, so when something bad happens the result can be a large increase in traffic because of the retransmissions that will likely follow. Sometimes congested routers will have to drop a packet, and if that packet happens to be part of a 65K datagram, the entire thing must be resent. The upper protocol, TCP or other, will normally know if an entire datagram is missing, and can request a retransmission. However, TCP cannot tell if a fragment is missing, since the IP datagram will be incomplete and never sent upstairs to TCP. If TCP never receives the segment, it will eventually be resent. It is clear that the loss of a small portion of a 65K packet doesn't help alleviate a congested link, but rather contributes to more traffic.

    UDP applications commonly don't exceed 576 bytes for the sending size, and this helps in two ways. First, there aren't many links with MTUs smaller than 576, so it is likely that the IP datagram will not be fragmented. Second, remember that 576 is the magical number for all end systems speaking IP: they all must be able to reassemble datagrams up to this size. Devices with limited memory may have trouble with anything larger, so this is actually worth considering.

    Let's pretend we're a host, and we'd like to send an IP datagram of 1550 bytes (1530 data + 20 header), but our MTU is 1500 bytes. We'll have to send two fragments, and the relevant IP headers will look like this:

    •  fragment 0, offset = 0, size = 1480, MF bit set.
    •  fragment 1, offset = 1480, size = 50

    The IP ID and IP addresses in the fragments are always the same as the original IP datagram, but the header checksum, offset, and length fields will definitely change. When the other end gets the first packet and sees that it is a fragment, it will wait to get the rest, reassemble them, and then pass them up the stack to the next protocol.
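    The arithmetic above can be sketched in a few lines of Python (a simplified model: it assumes a fixed 20-byte header and numbers offsets from zero, whereas real re-fragmentation of a fragment preserves the original datagram's offsets):

    ```python
    IP_HEADER = 20

    def fragment(data_len: int, mtu: int):
        """Yield (offset, size, more_fragments) for each fragment of a
        datagram carrying data_len payload bytes over a link with this MTU."""
        # Fragment data sizes must be multiples of 8, because the offset
        # field counts 8-byte blocks; round the usable payload down.
        max_data = (mtu - IP_HEADER) // 8 * 8
        offset = 0
        while offset < data_len:
            size = min(max_data, data_len - offset)
            more = (offset + size) < data_len    # MF bit set on all but last
            yield offset, size, more
            offset += size

    # The example from the text: 1530 data bytes over a 1500-byte MTU.
    frags = list(fragment(1530, 1500))
    # → [(0, 1480, True), (1480, 50, False)]
    ```

    This is why the first fragment carries exactly 1480 data bytes: 1500 minus the 20-byte header, which conveniently is already a multiple of 8.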

    After this is sent, we won't hear anything more about it, assuming the DF bit isn't set in the IP flags. But what happens if somewhere along the path the MTU is 400 bytes? Before the 1480-byte packet can be sent, the router on this link will fragment it too. Path MTU discovery, discussed last week, is used to get around the problem of intermediate routers having to fragment packets. Fragmentation takes time and precious resources on routers. The main reason we want to avoid excessive fragmentation is simply the extra delay that's inevitably introduced.

    Reassembly is always done at the final destination, so intermediate routers don't need to store IP datagrams. This also means that IP packets can be routed independently, over different paths without cause for concern. This is an important concept to understand--it makes IP very versatile. No matter what order the receiver gets the packets, it will be able to reassemble them based on the offset field in the IP header.

    Speed Limit 100Gbps – Cisco CRS-3 and Altibox Powers World’s Fastest Computer Party

    By Stephen Liu

    What do you get when you combine 5000+ gamers, a 100GE uplink to the Internet, a lot of espresso machines, and no parents to tell them to shut down the noise or go to bed early? A whole lot of fun!

    The Gathering (“TG”) is Norway’s largest computer party, and it kicked off today for the 20th time since 1992. It has grown so large that it is now held at one of the venues used for the 1994 Winter Olympics. TG continues to attract growing national and international interest in the gaming, computer, and entertainment event, which is organized by the non-profit organization KANDU (Kreativ Aktiv Norsk DataUngdom/Creative Active Norwegian Computer Youth). This year it’s powered at record speed by a Cisco CRS-3 router connected to The Gathering’s Internet provider, Altibox, at 100 Gbps, along with technical support provided by several of Cisco Norway’s engineers, including Merete Asak and Bjornar Forthun.

    This isn’t the first time the CRS has played a key role in a Scandinavian gaming conference. The Swedes used our 40G technology at their Dreamhack event in 2007, as we discussed at the time, but this raises the performance bar even higher.

    Although they probably won’t be playing Cisco’s award-winning myPlanNet game, they’ll still enjoy others such as StarCraft, Quake, and Heroes of Newerth. Attendees also compete in creative competitions in programming, graphics, and music.

    As one would expect to befit a computer party, the engineers proudly posted their network implementation on the Internet and even gave us a great testimonial:


    This year our sponsors really have stepped up to the plate by delivering a 100 Gigabit/s internet access with a kickass network to make sure we can enjoy the amazing capacity.

    As part of our SP marketing team I’ve got to say I really like the phrase “Cisco - Builds Kickass Networks,” but I don’t think it will get past our branding or legal departments, even if the CRS-3 is currently the only router to support a fully standards-compliant, single-flow 100GE link. If you are in Norway this week and you are really in the mood to play “Call of Duty” on a very low ping network, stop by The Gathering for some CRS-3-enabled fast gaming action!

    Friday, April 22, 2011

    New Features in Junos OS Release 11.1 for M Series, MX Series, and T Series Routers


    These release notes accompany Release 11.1R1 of the Junos OS. They describe device  documentation and known problems with the software. Junos OS runs on all Juniper Networks M Series, MX Series, and T Series routing platforms, SRX Series Services Gateways, J Series Services Routers, EX Series Ethernet Switches, and the QFX Series.

    Release Notes JUNOS 11.1

    Former engineer who sued Cisco now faces criminal charges

    IDG News Service - A one-time Cisco engineer who had sued his former employer, alleging it monopolized the business of servicing and maintaining Cisco equipment, has been charged by U.S. authorities with hacking.

    Peter Alfred-Adekeye, who left Cisco in 2005 to form two networking support companies, has been charged with 97 counts of intentionally accessing a protected computer system without authorization for the purposes of commercial advantage, according to an arrest warrant. He faces 10 years in prison and a US$250,000 fine if convicted on the charges.

    Alfred-Adekeye was abruptly arrested by the Royal Canadian Mounted Police in Vancouver on May 20, 2010, while giving a deposition to Cisco attorneys in his civil case. A Nigerian citizen and a resident of Zurich, Alfred-Adekeye was only in town for a few days for the deposition, but the U.S. Department of Justice tipped off Canadian authorities, telling them to look for him at Vancouver's Wedgewood Hotel.

    The U.S. Department of Justice accused Alfred-Adekeye of using a Cisco employee's user ID and password to download software and access Cisco's restricted website, according to the Canadian arrest warrant issued in the case.

    The case is still under seal as U.S. authorities try to extradite Alfred-Adekeye. His arrest had gone unreported until this week, when Vancouver media reported his extradition hearing. Alfred-Adekeye's U.S. arrest warrant was issued May 19, 2010 -- the day after he began his deposition in Canada -- by U.S. Magistrate Judge Howard Lloyd in the U.S. District Court for the Northern District of California, according to the Canadian warrant.

    The dapper Alfred-Adekeye has founded two nonprofit efforts aimed at fostering entrepreneurship -- Road to Entrepreneurial Leadership and The African Network -- and he's also the head of two startups. On one of his Web sites he describes himself as, "a successful British entrepreneur and innovator of royal African descent."

    One of Alfred-Adekeye's companies, Multiven, sued Cisco in December 2008, accusing the company of monopolizing the business of servicing and maintaining Cisco enterprise equipment. Cisco forced owners of gear such as routers, switches and firewalls to buy its SMARTnet service contracts in order to get regular software updates and bug fixes, Multiven said. By providing updates and bug fixes only to SMARTnet customers and not to third parties, Cisco prevented independent companies from servicing its equipment, Multiven alleged.

    The SMARTnet service is a hot-button issue with some customers, who feel that Cisco should provide basic bug fixes and software updates free of charge as Microsoft or Apple do.

    Neither Alfred-Adekeye nor Multiven could immediately be reached for comment Tuesday, but according to the Vancouver Sun, his attorney claimed in court Monday that Cisco and the U.S. Department of Justice colluded to arrest him during his deposition.

    In an e-mail, a Cisco spokeswoman said Tuesday, "We strongly disagree with the majority of the content in [the Vancouver Sun] article," adding that "the extradition is a matter between the two governments."

    After Multiven sued Cisco, the networking giant filed countersuits against Multiven, Alfred-Adekeye, and Pingsta, another company founded by Alfred-Adekeye. Multiven provides service and support for networking gear from multiple vendors. Pingsta advertises itself as an online platform for companies to hire engineers and buy cloud-based network expertise on a pay-per-use basis.

    The companies settled their civil lawsuits in July 2010, a few months after Alfred-Adekeye's arrest. Both sides dropped their claims and the companies paid their own legal costs.

    Wednesday, April 20, 2011

    Configuring Calling Encryption Between Cisco IP Phones



    By Paul Smith

    This blog is one of four dealing with the encryption of various Cisco UC devices. This particular piece deals with setting up encrypted calls between phones on a cluster. The other blogs in this series are Configuring CUCM with Secure LDAP, Configuring Secure Hardware Conferencing, and Configuring a Secure Voice Gateway. The information in the LDAP article stands on its own, but the steps herein must be followed before one can configure any of the items in the last two pieces.

    The steps detailed in these blogs may not be the final word on all of the ins and outs of configuring the items. However, we discovered that there are very few, or no, articles dealing with these subjects written by someone who has actually performed the tasks. We, therefore, felt it would be a service to offer information about the steps that worked for us in our specific set of circumstances (particularly since we, apparently, fell into most of the traps).

    Again, the following is for phone encryption, but these steps must also be followed first if one plans on configuring secure connections to voice gateways or plans on configuring secure conferencing.

    1) Begin by ensuring that the Cisco Certificate Authority Proxy Function service is running on the Publisher, and that the CTL Provider service is running on every node that is also running the CallManager service.

    2) Cluster security must be set to “Mixed Mode”. The cluster has 2 security modes, mixed or non-secure. With mixed mode, phones that support encryption, and which are configured for it, will set up encrypted calls between them. Phones that are not set for encryption will work, but will not have encrypted sessions between them. If a phone set for encryption calls a phone not set for encryption, the call will be completed but will not be encrypted.

    3) To set the encryption level for the cluster, two “Hardware Security Keys” must first be obtained from Cisco (part number KEY-CCM-ADMIN-K9=). They look like typical USB memory sticks and they always come in pairs.
    4) In the Plug-ins area of the CUCM Administration page, download the Cisco CTL Client. Once it’s downloaded, double-click on the file to install it on a PC.

    5) Make sure that DNS is configured properly in the cluster and that the DNS name of the servers running the CallManager service are resolvable by the PC running the CTL Client.

    6) Run the client. During the process, a prompt will be displayed that states that one of the keys must be plugged into the computer. Once information has been copied to and from the key, a prompt will state that the first key must be removed and the other key must be plugged into the computer. If you have two USB ports on the computer DO NOT insert both keys at the same time. If, at any point, a password is requested for the key, the default is “Cisco123” (case sensitive). Note: If, at any time, another individual set a different password for the keys, do not guess what that password may be. After 15 wrong attempts at guessing the password, the key locks and nothing will unlock it (this is part of the reason the keys come in pairs). If both keys get locked, another pair of keys must be obtained from Cisco.

    7) The CTL Client program is easy to run; it’s mostly a “Fill in obvious information, click Next, click Next, click Finish” type of thing. However, one of the tasks that is accomplished by running the program is setting the security mode of the cluster to “Mixed”. This is the only way that Mixed-Mode security can be set. The other task that is accomplished by running the CTL Client is the creation of a CTL file that will be used by the phones for encryption.

    8) After the CTL Client program has been run, restart the CallManager service and the TFTP service on every server in the cluster that has the services running.

    9) Next, a security profile must be created for each model of phone that will support encrypted conversations. Go to System > Security Profile > Phone Security Profile and click “Add New”.

    10) In the “Phone Security Profile Type” drop-down, select the model number of the phone for which a profile needs to be created. Click “Next”.

    11) Select the protocol the phone will use (SCCP or SIP) and click “Next”.

    12) Give the profile a Name and a Description. The Name will appear on the “Device Security” drop-down when a phone is created. In the “Device Security Mode” drop-down, select “Authenticated” if there is a simple need to authenticate the phone as being a device that’s supposed to attach to the CUCM cluster. Select “Encrypted” if there is a desire to encrypt the RTP and signaling streams to be unrecognizable to anyone who attempts to record and decipher them. We did Encrypted.

    13) In the “Authentication Mode” drop-down, the possible selections are “By Authentication String”, “By Null String”, “By Existing Certificate (Precedence to LSC)”, and “By Existing Certificate (Precedence to MIC)”. This dictates how encrypted communication will happen between CUCM and the phone. Using the LSC is the most secure method (realize, however, that the LSC must be first downloaded to the phone using a less secure method which will be detailed below). Set the Authentication Mode and click “Save”.

    14) Repeat the above three steps to create a profile for every model phone for which encryption will be configured.

    15) The next step is the one that gets the LSC onto the phone. Add a new phone (or go to an existing phone). Initially, the phone will probably be added with no security. But in the area marked “Certificate Authority Proxy Function (CAPF) Information”, hit the drop-down for “Certificate Operation” and select “Install/Upgrade”. In the “Authentication Mode” drop-down, select “By Authentication String”. In the “Authentication String” field, either add a string of numbers, or click the “Generate String” button and allow one to be generated automatically (Note: This step is something that can be done using BAT, but it’s recommended to use the same string throughout). Make sure the “Operation Completes By” setting is for some time in the future. Click Save and Apply.

    16) Go to the physical phone itself. Hit the “Settings” button and navigate to the security configuration area (getting to this area on a phone is different from model to model, but the goal is to find the section that mentions the LSC). The status of the LSC should be “uninstalled”. Unlock the phone with a **# and a softkey should appear that reads “Update”. Pressing this softkey will make the phone prompt for an Authorization String. Use the string of numbers that was selected or generated on the phone configuration page. Press the “Submit” softkey.

    17) During the LSC download process, the phone should state that it’s updating. Once the download is complete, the phone may reset.  When the procedure is complete, the LSC will be listed as “Installed”.

    18) Once installation is accomplished, go back into the phone’s configuration page in CUCM. In the “Protocol Specific Information” area, go to the “Device Security Profile” drop-down and select the secure profile that was configured for that model phone in a previous step. Once this is selected, the “Authentication Mode” (which is grayed out) will change from “By Authentication String” to “By Existing Certificate (Precedence LSC)”. Click Save and Apply. The phone will reset by itself.

    19) This is the point where things could get dicey. The phone should come back and register with no problem. However, some phones will be stubborn. It’s sometimes the case that the process needs to be redone beginning with setting the phone’s security profile to non-secure, saving and applying, then setting it back to using the LSC. We haven’t had to delete the LSC and start over, but there are items on Cisco’s NetPro where people claimed they had to do so.

    20) If the phone registers it’s probably ready to go. However, to check that the phone has been correctly configured for encryption, make a call between configured phones. If, once the call is answered, a padlock icon shows up next to the caller ID, the call is being encrypted correctly.

    21) An important element to remember is the fact that there are some changes that might be made to a cluster which will cause encryption to begin to fail. If encryption has been working on the phones, and the phones suddenly drop their registration after some configuration changes, the suggested first step will be to do a bulk change in the phones to a non-secure profile, just to get them working again. Then, during a maintenance window, rerun the CTL Client. Then set the security profiles back to the ones the phones are supposed to have, set the “Certificate Operation” drop-down to “Install/Upgrade”, make sure the operation has a future date, and Save and Apply.

    Cisco’s Compact Series (C-Series) Switches

    Cisco introduced the Cisco® Catalyst® 3560-C and Catalyst® 2960-C Compact Series (C-Series) switches. With these switches, Cisco continues to deliver on its commitment to innovation in its core technologies.

    These C-Series switches are aimed at helping customers deliver network services in locations that pose unique wiring, space or power challenges which would otherwise require disruption of business operations.

    Another industry-first: Power over Ethernet (PoE) pass-through technology
    With Cisco’s industry-first Power over Ethernet (PoE) pass-through capability, the C-Series Switches eliminate the need for power outlets and dramatically reduce cabling complexities and overall infrastructure requirements. PoE pass-through technology powers IP devices in locations without access to power outlets. A Cisco C-Series Switch can draw power from an upstream (PoE+/PoE-capable) switch or router in the wiring closet, both to power itself and to drive power downstream to the IP devices connected to it.
    Cisco EnergyWise gives the switches the capability to monitor, manage, and reduce energy consumption of the devices connected to the switch.  Devices can be turned off and powered down when they are not needed, allowing businesses to generate additional cost savings.

    Other key features of the C-Series Switches include:
    - Simple Setup and Unified Network Management: Including Cisco Catalyst Smart Operations for “zero touch” setup and quick troubleshooting, and Cisco Auto SmartPorts for automatically configuring the switch based on the type of devices that connect to it.
    - Unparalleled Security with Cisco TrustSec: For more info on Cisco TrustSec, please click here.
    - Dramatically reduced cabling costs and flexible device placement: The C-Series Switches do not require expensive individual cable drops and can be deployed up to 100m away from the wiring closet. The flexible device placement makes them particularly suited to non-traditional networking environments, and their sleek, fanless design makes them a good fit for locations such as check-out kiosks in retail stores.

    For more information visit -
    http://www.cisco.com/en/US/products/ps11527/Products_Sub_Category_Home.html

    Tuesday, April 19, 2011

    Mediatrace: A New Traceroute tool IOS 15.1(3)T

    The classic traceroute tool has become an essential tool for network engineers. Traceroute is able to discover layer-3 nodes (routers) along the path towards a destination. This information provides operators with visibility about the path towards a destination.

    However, traceroute has limitations: it may not follow the actual path of the flow (since its IP source address might be different), it cannot discover layer-2 devices (switches and bridges), and it returns only a single piece of information per hop (the IP address of the router).

    Mediatrace, which uses the same IP header as the flow you would like to trace, gives you much better path congruency—and confidence in the discovery. Mediatrace will also discover not only the routers (as with traceroute), but also the switches that are doing only layer-2 forwarding.

    Mediatrace does not need to be enabled on every hop. If it is not enabled on a node, the mediatrace packet will simply be forwarded through that part of the network. This is exactly what would happen in the case of your traditional MPLS-VPN network.


    Figure 1. Mediatrace tracing a flow while the operator chillaxes

    Now for the best part! Mediatrace can dynamically engage the performance monitor feature we talked about a few weeks ago. This allows a dynamic surgical monitoring policy to be applied for the flow we are tracing that results in hop by hop performance measurements such as loss and jitter. As is the case with all mediatrace runs, the information is brought back into a single report where it can be quickly analyzed.


    Figure 2. Mediatrace integration with performance monitor

    Despite the name, mediatrace is not only for voice/video flows. It is able to trace any IP flow, and is even able to engage performance monitor to gather hop by hop TCP stats.

    Mediatrace is a new tool that Cisco released in IOS 15.1(3)T for the ISR platforms as part of the medianet program. Over the course of 2011, this feature will proliferate across Cisco’s enterprise line of routers and switches.

    Monday, April 18, 2011

    Understanding Cisco Traffic Storm Control

    By Pete Welcher

    This blog is a quick note about an easily misunderstood set of switch commands, Cisco Traffic Storm Control. The commands are very useful, and work. However, they do seem to be commonly misunderstood -- or else the documentation is wrong. Part of the confusion may be due to different behavior on the large switches compared to the smaller Cisco switches. I get asked about this a lot when teaching the Nexus classes. I've seen Networkers slides that seem to think storm control behaves differently.

    I'm going by the documentation here; I don't have easy access to a 6500 or Nexus 7000 for lab testing (most N7Ks tend to be in production).

    The traffic storm control command(s) are still very useful for mitigating the effects of a Spanning Tree loop. My co-workers tell me that you do want hardware-based storm control: for example, by the time a 6500 Sup 2 (MSFC2) notices it should be doing software-based storm control, it is already toast. Toast = CPU spun up, stops doing BPDUs, stops sending UDLD so peers errdisable connections, etc.

    Traffic storm control is most useful in Cisco 6500 and Nexus 7000 switches. The documentation for the two models matches. (One suspects the source code is rather similar too.)
    Where does the confusion arise?

    Well, the manual says "Traffic storm control (also called traffic suppression) allows you to monitor the levels of the incoming broadcast, multicast, and unicast traffic over a 1-second interval. During this interval, the traffic level, which is a percentage of the total available bandwidth of the port, is compared with the traffic storm control level that you configured. When the ingress traffic reaches the traffic storm control level that is configured on the port, traffic storm control drops the traffic until the interval ends."

    It goes on to specify the syntax, which is to configure an interface with:

    storm-control {broadcast | multicast | unicast} level percentage[.fraction]

    The standard example is:

    interface Ethernet1/1
        storm-control broadcast level 40
        storm-control multicast level 40
        storm-control unicast level 40

    The problem comes about in that people think they get different thresholds for each of the three types of traffic: broadcast, multicast, unicast. WRONG! First hint: the thresholds in the example are all 40.

    Now read the syntax introduction carefully: "Traffic storm control uses a bandwidth-based method to measure traffic. You set the percentage of total available bandwidth that the controlled traffic can use. Because packets do not arrive at uniform intervals, the 1-second interval can affect the behavior of traffic storm control."

     (I put the key words in bold characters.) Each time you enter a storm-control command, you are adding to the flavors or types of controlled traffic. The threshold is the same threshold, which is applied to the sum total of the controlled traffic.

    The manual goes on to provide examples of how this works. It starts with

    If you enable broadcast traffic storm control, and broadcast traffic exceeds the level within the 1-second interval, traffic storm control drops all broadcast traffic until the end of the interval.

    This is what we all would probably expect, either way we interpret the operation of storm control. The manual goes on with:

    If you enable broadcast and multicast traffic storm control, and the combined broadcast and multicast traffic exceeds the level within the 1-second interval, traffic storm control drops all broadcast and multicast traffic until the end of the interval.
    If you enable broadcast and multicast traffic storm control, and broadcast traffic exceeds the level within the 1-second interval, traffic storm control drops all broadcast and multicast traffic until the end of the interval.
    If you enable broadcast and multicast traffic storm control, and multicast traffic exceeds the level within the 1-second interval, traffic storm control drops all broadcast and multicast traffic until the end of the interval.

    This makes it fairly clear that the aggregate of all the controlled types is what is being measured against the threshold -- and that there is only one threshold.

    The Nexus 7000 command reference pretty much clarifies it: "Enter the storm-control level command to enable traffic storm control on the interface, configure the traffic storm-control level, and apply the traffic storm-control level to all traffic storm-control modes that are enabled on the interface. Only one suppression level is shared by all three suppression modes. For example, if you set the broadcast level to 30 and set the multicast level to 40, both levels are enabled and set to 40."

    Unfortunately, "both levels" sounds like two different levels, each set to 40 -- unfortunate wording. The other documentation and the command behavior makes much more sense if the one and only threshold level is being set to 40.
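    The shared-threshold behavior the documentation describes can be modeled in a few lines (a toy sketch, not Cisco code; the function name and interface are invented for illustration):

    ```python
    def storm_control_drops(enabled: set, level_pct: float,
                            traffic_pct: dict) -> bool:
        """Model one 1-second storm-control interval: return True if the
        controlled traffic classes get suppressed for the rest of it.
        traffic_pct maps class name -> % of port bandwidth this interval."""
        # One shared threshold, compared against the SUM of all enabled
        # (controlled) classes -- not a separate threshold per class.
        controlled = sum(traffic_pct.get(kind, 0.0) for kind in enabled)
        return controlled >= level_pct

    # Broadcast alone at 25% does not trip a 40% level...
    assert not storm_control_drops({"broadcast", "multicast"}, 40,
                                   {"broadcast": 25.0, "multicast": 0.0})
    # ...but broadcast 25% + multicast 20% does, and BOTH classes get dropped.
    assert storm_control_drops({"broadcast", "multicast"}, 40,
                               {"broadcast": 25.0, "multicast": 20.0})
    ```

    This matches the manual's examples: once the aggregate of the controlled types crosses the single configured level, all controlled traffic is dropped until the interval ends.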

    The blog at http://blog.ipexpert.com/2010/03/15/old-ccie-myths-storm-control/ tackles testing this, albeit on a small Catalyst switch. The testing methodology at the end of the blog unfortunately turns only one of broadcast and multicast storm control on at a time, which seems to me to defeat the purpose. My tentative conclusion: it seems likely the behavior of small Catalyst switches may differ from that of the Catalyst 6500 and Nexus 7000. The small switch documentation is a bit ambiguous either way. 

    Happy First Birthday, CRS-3!

    Can you believe it? It’s been one year since we launched the Cisco CRS-3 Carrier Routing System! I’m very pleased that the CRS-3 adoption rate is four times faster than the original CRS-1 series. In just a year, 80 service provider customers in more than 30 countries are deploying the platform – a true testament to the scalability and sustainability of the architecture.

    Further, service provider customers across the world like AT&T, Comcast, Turkcell in the Middle East, Main One in West Africa, and Hong Kong Broadband in East Asia, among others, are unanimous about the CRS platform increasing the relevance of the network by enabling fixed-mobile convergence, value-added services and consumer broadband. We appreciate the vision and innovation demonstrated by our customers as they incorporate the CRS-3 platform into their next-generation networks.

    The strong market response to the CRS-3 validates our belief that this platform is the foundation for the next-generation Internet. Unlike competitive offerings that require refreshes, upgrades or even full replacements within just a few years, the Cisco CRS platform is designed to seamlessly accommodate the extraordinary growth of video traffic, mobile devices and new online services through this decade and beyond, delivering unprecedented investment protection.

    While others in the industry make promises of 100G, we are shipping more capacity than all of our competition combined. The CRS-3 and IOS XR engineering teams are bringing to market truly world-class innovations in all aspects of design, development and delivery. I am very proud of the CRS development team.

    Sunday, April 17, 2011

    Cisco Completes Acquisition of newScale


    SAN JOSE, Calif. – Cisco today announced it has completed its acquisition of privately-held newScale Inc., a leading provider of software that delivers a service catalog and self-service portal for IT organizations to select and quickly deploy cloud services within their businesses. Based in San Mateo, Calif., newScale allows commercial and enterprise customers to initiate the provisioning of their own systems and infrastructure on an as-needed basis.
    Cisco’s cloud strategy is to harness the network as a platform for building and using clouds and cloud services. newScale complements and expands existing Cisco and partner software offerings in IT and cloud management and automation. Cisco remains committed to supporting flexibility and choice in management through a broad ecosystem of technology partners, and newScale delivers an additional option the company can provide to its customers.
    Financial terms of the transaction are undisclosed. With the close of the acquisition, the newScale team reports into Cisco’s Advanced Services organization.
    For ongoing news, please go to http://newsroom.cisco.com

    Networking 101: Understanding the Data Link Layer

    What's more important than IP and routing? Well, Layer 2 is much more important when it's broken. Many people don't have the Spanning Tree Protocol (STP) knowledge necessary to implement a resilient layer 2 network. A switch going down shouldn't prevent anyone from having connectivity, excluding the hosts that are directly attached to it. Before we can dive into Spanning Tree, you must understand the inner workings of layer 2.

    Layer 2, the Data Link layer, is where Ethernet lives. We'll be talking about bridges, switching and VLANs with the goal of discovering how they interact in this part of Networking 101. You don't really need to study the internals of Ethernet to make a production network operate, so if you're inclined, do that on your own time.

    Ethernet switches, as they're called now, began life as "bridges." Traditional bridges would read all Ethernet frames and then forward them out every port except the one they came in on. They allowed redundancy via STP, and they also began learning which MAC addresses were on which port. At that point a bridge became a learning device, meaning it would store a table of all MAC addresses seen on each port. When a frame needed to be sent, the bridge could look up the destination MAC address in the bridge table and know which port it should be sent out of. The ability to send data to only the correct host was a huge advance in switching because collisions became much less likely. If the destination MAC address wasn't found in the bridge table, the switch would simply flood the frame out all ports. That's the only way to find where a host actually lives for the first time, so as you can see, flooding is an important concept in switching. It turns out to be quite necessary in routing too.
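    The learn/look-up/flood cycle described above can be sketched in a few lines of Python. This is an illustrative toy, not production switch code; the class and method names are my own.

```python
# Toy learning bridge: learn source MACs per port, forward to the known
# port, and flood out every other port when the destination is unknown.

class LearningBridge:
    def __init__(self, num_ports):
        self.ports = list(range(num_ports))
        self.table = {}               # MAC address -> port it was seen on

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is sent out of."""
        self.table[src_mac] = in_port             # learn the sender's port
        if dst_mac in self.table:
            out = self.table[dst_mac]
            return [] if out == in_port else [out]
        # Unknown destination: flood out every port except the ingress one.
        return [p for p in self.ports if p != in_port]

br = LearningBridge(num_ports=4)
print(br.receive(0, "aa:aa", "bb:bb"))  # unknown dst: flooded to [1, 2, 3]
print(br.receive(1, "bb:bb", "aa:aa"))  # aa:aa already learned on port 0: [0]
print(br.receive(0, "aa:aa", "bb:bb"))  # bb:bb now known on port 1: [1]
```

    Notice how the reply frame is enough to populate the table: after one round trip, no more flooding is needed for that pair of hosts.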

    Important terminology in this layer includes:

    Unicast segmentation: Bridges can limit which hosts hear unicast frames (frames sent to only one MAC address). Hubs would simply forward everything to everyone, so this alone is a huge bandwidth-saver.

    Collision Domain: The segment over which collisions can occur. Collisions rarely happen anymore, since switched links are full-duplex and point-to-point. If you see collisions on a port, that means someone accidentally negotiated half-duplex, or something else is very wrong.

    Broadcast Domain: The segment over which broadcast frames are sent and can be heard.

    A few years later, the old store-and-forward method of bridge operation was modified: new switches started looking only at the destination MAC address of the frame and then sending it instantly. This was dubbed "cut-through forwarding," presumably because frames cut through the switch much more quickly and with less processing. This implies a few important things: a switch can't check the CRC to see if the frame was damaged, which in turn implies that collisions needed to be made impossible.
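    The trade-off between the two forwarding modes can be shown with a tiny simulation. This is a deliberately simplified model under my own naming (real switches work on bits in flight, and CRC-32 framing details differ); it only illustrates that cut-through cannot drop a damaged frame, while store-and-forward can.

```python
# Contrast of the two forwarding modes (toy model): store-and-forward
# buffers the whole frame and verifies the CRC before forwarding;
# cut-through forwards as soon as the destination is read, so a
# corrupted frame still gets propagated.
import zlib

def store_and_forward(raw):
    frame, crc = raw[:-4], raw[-4:]
    if zlib.crc32(frame).to_bytes(4, "big") != crc:
        return None                   # damaged frame is dropped
    return frame

def cut_through(raw):
    return raw[:-4]                   # forwarded before the CRC is ever seen

good = b"payload"
raw_good = good + zlib.crc32(good).to_bytes(4, "big")
raw_bad = b"paXload" + zlib.crc32(good).to_bytes(4, "big")  # corrupted body

print(store_and_forward(raw_good))  # b'payload'
print(store_and_forward(raw_bad))   # None (dropped)
print(cut_through(raw_bad))         # b'paXload' -- damage passes through
```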
     
    Now, to address broadcast segmentation, VLANs were introduced. If you can't send a broadcast frame to another machine, it's not on your local network, and you will instead send the packet to a router for forwarding. That's what a Virtual LAN (VLAN) does, in essence: it makes more networks.

    On a switch, you can configure VLANs, and then assign a port to a VLAN. If host A is in VLAN 1, it can't talk to anyone in VLAN 2, just as if they lived on totally disconnected devices. Well, almost; if the bridge table is flooded and the switch is having trouble keeping up, all data will be flooded out every port. This has to happen in order for communication to continue in these situations. This needs to be pointed out because many people believe VLANs are a security mechanism. They are not even close. Anyone with half a clue about networks (or with the right cracking tool in their arsenal) can quickly overcome the VLAN broadcast segmentation. In fact, a switch will basically turn into a hub when it floods frames, spewing everyone's data to everyone else.
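    The port-to-VLAN assignment described above amounts to restricting the flood domain. A minimal sketch (my own model, not switch firmware) of how a broadcast only reaches ports in the same VLAN:

```python
# Sketch of VLAN segmentation: a broadcast arriving on a port floods
# only to the other ports assigned to the same VLAN, so hosts in
# VLAN 2 never hear VLAN 1 traffic.

port_vlan = {1: 1, 2: 1, 3: 2, 4: 2}   # port -> VLAN assignment

def broadcast_targets(in_port):
    """Ports that receive a broadcast frame arriving on in_port."""
    vlan = port_vlan[in_port]
    return [p for p, v in port_vlan.items() if v == vlan and p != in_port]

print(broadcast_targets(1))  # [2] -- only the other VLAN 1 port
print(broadcast_targets(3))  # [4] -- VLAN 2 stays separate
```

    When the MAC table overflows and the switch falls back to flooding, it is exactly this per-VLAN boundary (and nothing stronger) that still applies, which is why VLANs are segmentation, not security.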

    If you can't ARP for a machine, you have to use a router, as we already know. But does that mean you have to physically connect wires from a router into each VLAN? Not anymore; we have layer 3 switches now. Imagine, if you will, a switch that contains 48 ports. It has VLAN 1 and VLAN 2: ports 1-24 are in VLAN 1, while ports 25-48 are in VLAN 2. To route between the two VLANs, you have basically three options. First, you can connect a port in each VLAN to a router and give the hosts the correct default gateway. In the new-fangled world of today, you can instead simply bring up a virtual interface in each VLAN. In Cisco-land, the router interfaces would be called vlan1 and vlan2. They get IP addresses, and the hosts use the router interface as their gateway.

    The third way brings us to the final topic of our layer 2 overview. If you have multiple switches that need to contain the same VLANs, you can connect them together so that VLAN 1 on switch A is the same as VLAN 1 on switch B. This is accomplished with 802.1q, which tags frames with a VLAN identifier as they leave the first switch. Cisco calls these links "trunk ports," and you can carry as many VLANs on them as the switch allows (the 12-bit 802.1q tag gives 4,094 usable VLAN IDs, since two values are reserved). So the third and final way to route between VLANs is to connect a trunk to a router and bring up the appropriate interface for each VLAN. The hosts in VLAN 1 on both switch A and switch B will have access to the router interface (which happens to be on another device), since they are all "trunked" together and share a broadcast domain.
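    The tag-on-egress, strip-on-ingress behavior of a trunk can be sketched as follows. This is an illustrative model with simplified fields and my own function names, not the actual 802.1q frame layout (which inserts a 4-byte tag after the source MAC).

```python
# Sketch of 802.1q trunking between two switches: frames crossing the
# trunk carry a VLAN ID so the far switch can keep the broadcast
# domains separate. Fields are simplified for illustration.

def tag(frame, vlan_id):
    """Attach a VLAN tag before sending a frame over the trunk."""
    if not 1 <= vlan_id <= 4094:      # IDs 0 and 4095 are reserved
        raise ValueError("invalid VLAN ID")
    return {"vlan": vlan_id, "payload": frame}

def untag(tagged):
    """Far switch strips the tag and delivers within the right VLAN."""
    return tagged["vlan"], tagged["payload"]

t = tag({"dst": "ff:ff", "src": "aa:aa"}, vlan_id=10)
vlan, frame = untag(t)
print(vlan, frame["src"])  # 10 aa:aa
```

    The far switch uses the recovered VLAN ID to decide which ports are eligible to receive the frame, exactly as if it had arrived on a local access port in that VLAN.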

    We've saved you from the standard "this is layer 2, memorize the Ethernet header" teaching method. To become a true guru you must eventually know it, but to be a useful operator (something the cert classes don't teach you), you simply need to understand how it all works. Join us next time for an exploration of the most interesting protocol in the world: Spanning Tree.
