Sunday, March 31, 2013

Realtime Chat between Cisco Routers


You probably know that it's possible to send messages from one vty line to another on a single Cisco router.

R1#send ?
  *        All tty lines
  <0-17>   Send a message to a specific line
  aux      Auxiliary line
  console  Primary terminal line
  log      Logging destinations
  qdm      Send a message to QDM client
  vty      Virtual terminal
  xsm      Send a message to XSM client



R1#send 1
Enter message, end with CTRL/Z; abort with CTRL/C:
Hi
^Z
Send message? [confirm]

R1#

***
***
*** Message from tty0 to tty1:
***
Hi


I was looking for a way to automatically send custom UDP packets from a Cisco router to a specific destination, in order to emulate the heartbeat mechanism of SixXS. Tcl seemed like a nice option, but as far as I know its implementation in Cisco IOS doesn't support extensions (Tcl doesn't have a built-in command for UDP channels, so an extension would be needed to enable them).
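For comparison, this sort of periodic heartbeat is trivial on a general-purpose host. A minimal Python sketch (the destination and payload format are made up for illustration; SixXS used its own heartbeat protocol):

```python
import socket
import time

def send_heartbeats(dest, count, interval=10):
    """Send `count` UDP heartbeat datagrams to `dest` (host, port),
    pausing `interval` seconds between them (no pause after the last)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(count):
        # Payload format is a made-up example, not the SixXS wire format
        sock.sendto(b"HEARTBEAT %d" % i, dest)
        if i < count - 1:
            time.sleep(interval)
    sock.close()
    return count
```

Doing the equivalent from IOS without Tcl extensions is exactly the problem UDPTN ends up solving.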

Asynchronous Serial Traffic Over User Datagram Protocol, or UDPTN (UDP Telnet), is an IOS feature that encapsulates asynchronous data into UDP packets and transmits it, unreliably, without establishing a connection with a receiving device. UDPTN is similar to Telnet in that both are used to transmit data, but UDPTN is unique in not requiring a connection to the receiver.

Its usage is quite simple: enable udptn as an output transport on your vty lines, then open a connection to the remote end.

R1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#line vty 0 15
R1(config-line)#transport output ssh udptn
R1(config-line)#^Z
R1#

R2#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R2(config)#line vty 0 15
R2(config-line)#transport output ssh udptn
R2(config-line)#^Z
R2#


You have various options regarding the role of each device, but usually one end transmits and the other receives. If you need two-way communication, enable both directions. You can use any port above 1024, or the default port 57.

R1#udptn 1.1.1.2 3740 /transmit /receive
Trying 1.1.1.2, 3740 ... Open

R2#udptn 1.1.1.1 3740 /transmit /receive
Trying 1.1.1.1, 3740 ... Open


It becomes more interesting if you send the packets to a multicast or broadcast address, so that everyone with an open connection there sees the data.
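Since udptn just puts the raw characters into UDP datagrams, nothing stops an ordinary PC from listening in with a plain socket. A quick Python sketch of the receiving side (port 3740 matches the examples here; nothing about it is router-specific):

```python
import socket

def open_chat_socket(port):
    """Bind a UDP socket that can receive udptn datagrams on `port`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    return sock

def receive_chat(sock, max_msgs):
    """Read `max_msgs` chat datagrams and return their decoded payloads."""
    msgs = []
    while len(msgs) < max_msgs:
        data, _peer = sock.recvfrom(4096)
        msgs.append(data.decode(errors="replace"))
    return msgs
```

Sending back the other way is just `sock.sendto()` toward the router's address and port.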

There are some annoyances, like not being able to see locally the characters you type, or incoming characters printing on top of the previous ones (you can send spaces to clear the line), but you can't expect the full experience.

R2#udptn 1.1.1.1 3740 /transmit /receive
Trying 1.1.1.1, 3740 ... Open
How are you doing today?     ! This was typed on R1

R1#udptn 1.1.1.2 3740 /transmit /receive
Trying 1.1.1.2, 3740 ... Open
Fine, thanks                 ! This was typed on R2


Voila! You just made it possible to have a chat with your friend at a remote Cisco router! If you want to stop the session, you can use Ctrl-Shift-6 + x and then enter the "disconnect" command.

There are two terminal options, configured under the source vty lines, that change how text output behaves:

dispatch-timeout 10000 : causes buffered characters to be transmitted every 10 seconds
dispatch-character 13 : causes the buffered characters to be sent when you press Enter (ASCII 13). By default each character is sent immediately.
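For reference, this is how the two options would go under the vty line configuration on the sending router (the values are the ones discussed above; adjust to taste):

R1#conf t
R1(config)#line vty 0 15
R1(config-line)#dispatch-character 13
R1(config-line)#dispatch-timeout 10000
R1(config-line)#^Z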

Note : Because of its ability to send raw UDP datagrams that might conflict with other protocols, UDPTN has an implicit access list that only allows UDPTN connections to UDP port 57 (default) or UDP ports greater than 1024.

If only I could now find a way to send such messages automatically, I would probably solve my initial issue. EEM doesn't provide a mechanism to feed characters into a remote session, and the Tcl "typeahead/exec" solution makes the process get stuck (and I can't find a way to clear it). Any idea how to send Ctrl-Shift-6 + x?
 

Saturday, March 30, 2013

First look: Juniper's core SDN switch

Juniper does not have a migration plan for current EX8200 users to EX9200 40/100G Ethernet/SDN switch

Earlier than expected, Juniper this week unveiled the EX9200 programmable core switch for 10G, 40G and 100Gbps software-defined networks.

The EX9200 is a complete core replacement for current generation EX8200 customers looking for 40/100G Ethernet and/or SDN programmability. None of the EX8200 line cards are forward compatible with the EX9200, Juniper says.


"(The upgrade) sounds drastic, but the EX8200 is successful in lower density markets," says Dhritiman Dasgupta, Juniper senior director of product marketing and strategy. "We did not have a solution for very high speed densities."

Juniper does not yet have an EX8200-to-EX9200 migration or trade-in program in place for customers but is working on one, Dasgupta says.

The EX9200 is based on Juniper's MX edge router rather than the EX switch. It runs the same 3+-year-old One/Trio ASIC as the MX, and the form factors are identical, but the chassis, line cards, switch fabric, routing engines and software are not, Dasgupta says.

Only power supplies, fan trays, air filters and power cables are interchangeable components between the EX9200 and MX router, he says. New guts were necessary to bring the MX-based EX9200 down to a switch price point, he says - functionally-enriched routers cost much more than switches.

Juniper went with the MX technology instead of the EX switch platform for the EX9200 due to the programmability of the Trio ASIC, Dasgupta says.

"We would not have been able to deliver that" with the EX platform, he says.

He also claims that the EX8200 was never designed for 40/100G Ethernet despite the multi-terabit switch capacity numbers Juniper touted for the box. Nonetheless, customers may have been sold on the EX8200 with the understanding it was "100G Ready" -- Juniper resellers and channel partners are
being advised to reassure those customers that Juniper will continue to invest in and support the
EX8200, and aid in the migration to the EX9200, sources familiar with the program have said.

Dasgupta says the EX8200 will receive mostly software enhancements from here on.

The EX9200 has also cast future development of the QFabric Interconnect, the core of Juniper's QFabric single-tier data center fabric switching system, into question. Sources say QFabric will remain Juniper's lead platform for single-tier fabrics but for "10G-only" implementations, despite the platform's support for currently shipping 40G interfaces.

The majority of QFabric top-of-rack switches - called "nodes" in the QFabric architecture - already have 40G uplinks, Dasgupta says.

QFabric will be positioned as the alternative to Cisco's Nexus 7000 with "F" series cards, which are optimized for high-density, low-power, shallow buffers and limited features, sources say. EX9200 will be positioned against the Nexus 7000 "M" series in the data center, and against Cisco's Catalyst 6500E and HP's 7500/10500/12500 switches in the campus core with large physical or logical scale, and/or 40/100G requirements, they say.

It is unclear what Juniper will position against Cisco's new Nexus 6000 40G data center switch but it's conceivable the EX9200, perhaps in combination with QFabric top-of-rack nodes, will take that on as well.

Dasgupta says Juniper is still committed to continued development of single-tier fabric architectures and that there's demand for the predictable low latency switching it's designed for. He says Juniper is signing up a couple of "full" QFabric - node and interconnect - customers per quarter.

"In 10 years, every data center will need an architecture like QFabric," Dasgupta says. "You will be able to sign your application performance SLAs in blood" with such an architecture.

He would not comment on whether 100G is planned for QFabric, but it appears it is still unclear if the QFabric Interconnect will attain it.

"I can't comment on when 100G [might emerge] for QFabric at the edge," Dasgupta said, apparently referring to the top-of-rack nodes. Juniper declined to clarify Dasgupta's remark.

If only QFabric top-of-rack nodes attain 100G, they will connect into an EX9200 core for two-tier data center networking, sources say. Juniper is positioning the EX9200 as the optimal platform for that, they say, in combination with the EX series and/or QFabric 1G/10G top-of-rack switches.

In addition to the EX9200, Juniper this week also unveiled a network management application for controlling mixed environments of wired and wireless LANs in campuses and data centers. Junos Space Network Director includes the RingMaster management application Juniper obtained from its acquisition of Trapeze Networks in 2010.

Network Director will work with a new virtualized wireless controller Juniper also rolled out this week. The JunosV Wireless LAN Controller is software that runs as a VM on x86 servers that's designed to abstract the underlying hardware appliance that provides wireless control services.

The JunosV WLC will be embedded in the EX9200 in about a year, Dasgupta says. From the core, it will be able to manage thousands of access points and tens of thousands of WLAN users, with high-availability, he says.

Thursday, March 28, 2013

Understanding Spanning Tree Protocol



Spanning-tree Protocols
802.1d (Standard Spanning-tree)
So the entire goal of spanning-tree is to create a loop free layer 2 domain. There is no TTL in a layer 2 frame so if you don’t have spanning-tree, a frame can loop forever. So the original 802.1d standard set out to fix this. There are a few main pieces to the 802.1d process. They are…

1. Elect a root bridge.
This bridge is the ‘root’ of the spanning-tree. In order to elect a root bridge, all of the switches send out BPDUs (Bridge Protocol Data Units). Each BPDU carries a bridge ID, which the switches use to determine which switch should be the root; the lowest bridge ID wins. The original standard specified a bridge ID as…

[figure: original bridge ID format – a 2-byte bridge priority followed by the 6-byte MAC address]
 
As time progressed there became a need to create multiple spanning-trees for multiple VLANs (we’ll get to that later). So, the bridge ID format had to be changed. What they came up with was..
 
[figure: updated bridge ID format – a 4-bit bridge priority, a 12-bit extended system ID (the VLAN ID), and the 6-byte MAC address]
 
So, now you know why you need a bridge priority that's a multiple of 4096 (if you don't: 4 bits gives you 16 possible values, and 16 * 4096 = 65,536, which covers the full range of the old 16-bit priority field).
 
So at this point, we have a mess of switches swarming around with BPDUs. If a switch receives a BPDU with a lower bridge ID, it knows that it isn't the root. At that point, it stops sending out its own bridge ID and starts sending out BPDUs with the better (lower) bridge ID it heard. In the end, all of the switches will be forwarding BPDUs with the lowest bridge ID, and the switch originating that best (lowest) bridge ID knows that it is the root bridge.
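In effect the election is just a comparison of (priority, MAC) values. A toy Python sketch of that comparison using the extended-system-ID format described above (the switch values are made up):

```python
def bridge_id(priority, vlan, mac):
    """Comparable bridge ID under the extended-system-ID format:
    the 4-bit priority (a multiple of 4096) and the 12-bit VLAN ID
    share the old 16-bit priority field; the MAC breaks ties."""
    assert priority % 4096 == 0 and 0 <= priority <= 61440
    assert 0 <= vlan < 4096
    return (priority + vlan, mac)

def elect_root(bridges):
    """Lowest bridge ID wins the election."""
    return min(bridges, key=lambda b: bridge_id(*b))

# Illustrative switches as (priority, vlan, mac) - made-up values
switches = [
    (32768, 10, 0xAAAAAAAAAAAA),  # default priority
    (4096,  10, 0xBBBBBBBBBBBB),  # lowered priority, so it wins
    (32768, 10, 0x000000000001),  # lowest MAC, but priority is compared first
]
```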
 
2. Each switch selects a root port
So now that we know which switch is the root, every non-root switch needs to select its root port. That is, the port with the lowest cost to the root switch. To determine this, the root switch sends hellos out of all of its ports every 2 seconds. When a non-root switch receives a hello, it does a couple of things. First, it reads the cost from the hello message and updates it by adding the ingress port's cost. So if a hello with a cost of 4 came in on a Fast Ethernet port, the switch would add 19 to it, giving a new cost of 23. Once the hellos are in, the switch picks as its root port the port with the lowest calculated cost. Now, a bit about port costs. See the table below…

Interface Speed   Original IEEE Port Cost   New IEEE Port Cost
10 Mbps           100                       100
100 Mbps          10                        19
1000 Mbps         1                         4
10000 Mbps        1                         2

So as you can see, the increase in speeds brought an update to the port costs. Now that we have 40 gig interfaces, I'm wondering if they will redo them again. At any rate, if there is a tie (say, two ports that both have a calculated cost of 23), the switch breaks the tie in the following fashion:

1. Pick the lowest bridge ID of the switch that sent the hellos
2. Pick the lowest port priority of the switch that sent the hellos
3. Use the lowest port number of the switch that sent the hellos
(We’ll talk about port priorities in a bit) Now that we have a root port we can move onto step 3.
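Since each tiebreaker is only consulted when the previous one ties, root-port selection amounts to an ordered tuple comparison. A toy Python sketch (the candidate values are made up):

```python
def pick_root_port(candidates):
    """Choose the root port from candidate tuples of
    (calculated cost, sender bridge ID, sender port priority,
     sender port number, local port name), applying the
    tiebreakers in exactly that order."""
    return min(candidates, key=lambda c: c[:4])[4]

# Made-up candidates: two ports tie at cost 23 from the same neighbor
ports = [
    (23, 0x8000AA, 128, 2, "Fa0/1"),
    (23, 0x8000AA, 128, 1, "Fa0/2"),  # lowest sender port number wins the tie
    (42, 0x1000BB, 128, 1, "Fa0/3"),  # better bridge ID, but cost is compared first
]
```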

3. Pick a designated port
This part is pretty easy. Basically, each segment can only have one designated port. The switch that forwards the lowest-cost hello onto a particular segment becomes the designated switch, and the port it uses to do that is the designated port. That means every port on the root bridge is a designated port. Then, ports that are neither root ports nor designated ports (non-designated ports) go into the blocking state. If a tie occurs, the same tiebreaker process applies as in step 2.

At this point, we have a fully converged spanning-tree!

Normal Operation
Under normal operation the root sends hellos out of all of its active ports. Each connected switch receives the hello on its root port, updates it, and forwards it out of its designated ports (if it has any). Blocked ports receive the hellos, but never forward them.

Topology Changes
When a switch notices a topology change, it's responsible for telling all other connected switches about the change. The most effective way to do this is to tell the root switch, so that it can tell all of the other switches. When a switch notices a topology change, it sends a TCN (topology change notification) out its root port. The switch will send the TCN every hello time until the upstream switch acknowledges it by sending a hello with a TCA (topology change acknowledgement). This process continues until the root is notified. The root will then set the TC flag on its hellos. When switches in the tree see the TC flag set in a hello from the root, they know that there has been a topology change and that they need to age out their CAM tables. Switches aging out their CAM tables is an important part of a topology change and reconvergence.

802.1D Port States

Blocking – The port is blocking all traffic with the exception of receiving STP BPDUs. The port will not forward any frames in this state.
Listening – Same as blocking but will now begin to send BPDUs.
Learning – The switch will begin to learn MAC information in this state.
Forwarding – Normal full up and up port state. Forwarding normal traffic.

Timing
There are a couple of main timers in STP. These are:
Forward Delay Timer – Default of 15 seconds
Hello – Default of 2 seconds
MaxAge – Default of 20 seconds

Spanning-Tree enhancements (Cisco Proprietary)
PortFast – Immediately puts a port into forwarding mode. Essentially disables the STP process. Should only be used for connecting to end hosts.
UplinkFast – Should be used on access layer switches connecting to distribution. Used to fail over to an alternate root port when the primary root port fails. Upstream CAM entries are timed out by the access layer switch generating multicast frames with the attached devices' MACs as the source. This is different from the normal TCN process described earlier. UplinkFast also raises the switch's bridge priority to 49152 and increases all of the port costs by 3000.
BackboneFast – Used to detect indirect STP failures, so the switch doesn't have to wait MaxAge to reconverge. The feature needs to be configured on all switches in order for it to work. When the switch stops receiving hellos, it queries its upstream switches with an RLQ (Root Link Query). If the upstream switch had a failure, it can reply to the local switch so that it can converge onto another port without waiting for MaxAge to expire.

802.1w (Rapid Spanning-Tree)
Rapid spanning-tree takes 802.1d and makes it faster. In addition, they take some of the Cisco proprietary features and standardize them. Here are some of the notable changes that 802.1w makes.

-Switches only wait to miss 3 hellos on their root port before reconverging. In 802.1d this number was 10 (MaxAge, or 10 times hello).
-Fewer port states. 802.1w takes the number of port states from 5 (I'm counting disabled) down to 3.

The new states are discarding, learning, and forwarding.
-Concept of a backup DP when a switch has multiple ports connected to the same segment.
-Standardization of the Cisco proprietary PortFast, UplinkFast, and BackboneFast.

802.1w Link Types
Point to Point – Connects a switch to another switch in full duplex mode.
Shared – Connects a switch to a hub using half duplex
Edge – A user access port

802.1w Port roles
Root Port – The same as in 802.1d
Designated Port – The same as in 802.1d
Alternate Port – Same as the uplink fast feature, backup RP connection
Backup Port – Alternate DP port, can take over if the existing DP fails

802.1s (Multiple Spanning-Tree)
Multiple spanning-tree (MST) lets you map VLANs into a particular spanning tree. These VLANs are then considered to be part of the same MST region. MST uses the same features as RSTP for convergence, so if you are running MST, you are by default also running RSTP. Much like any other ‘group’ technology, there are several parameters that must be met before switches/vlans can become part of the same region.

-MST must be globally enabled
-The MST region name must be configured (and the same on each switch)
-Define the MST revision number (and make it the same on each switch)
-Map the same VLANs into each region (or instance)

MST can co-exist with switches that don't talk MST. In this case, the entire MST region appears to be a single switch to the other, ‘external’ spanning-tree. The spanning-tree that connects the region to the ‘outside’ is considered to be the IST, or Internal Spanning Tree.

Spanning-tree Protection
There are several ‘protection’ mechanisms available that can be implemented in conjunction with spanning-tree to protect the spanning-tree from failure or loops.

BPDU Guard – Should be enabled on all ports that will never connect to anything but an end host. The configuration will err-disable a port if a BPDU is received on it. To recover from this condition the port must be shut/no shut.

Root Guard – Protects the switch from choosing the wrong RP. If a superior BPDU is heard on this port the port is placed into root-inconsistent state until the BPDUs are no longer heard.

UDLD – Unidirectional link detection is used to detect when one side (transmit or receive) of a link is lost. States like this can cause loops and loss of connectivity. UDLD functions in two modes, aggressive and normal. Normal mode uses layer 2 messaging to determine if a switch's transmission capabilities have failed. If this is detected, the switch with the failed transmit side goes into err-disable. In aggressive mode the switch tries to reconnect with the other side 8 times. If this fails, both sides go into err-disable.

Loop Guard – When a port configured with loop guard stops hearing BPDUs it goes into loop-inconsistent state rather than transitioning into forwarding.
 

Wednesday, March 27, 2013

A Next-generation Enterprise WAN architecture

Courtesy - Network World
 
 
Enterprise WANs have changed very little in the last 15 or so years. While price/bit for the Enterprise WAN has improved somewhat over that time, it hasn’t increased with Moore’s Law as has computing, storage, Internet access, LAN switching … and pretty much everything else associated with IT. And while Internet connections have seen Moore’s Law bring about quantum improvements in price/bit, the unaided public Internet is still not reliable enough to deliver business-class quality of service and performance predictability, which is why the overwhelming majority of Enterprise WANs are based not on IPsec VPNs over the Internet but instead on private MPLS services from telcos like AT&T, Verizon and BT.
Starting now and over the next few years, however, the Enterprise WAN will have a revolution of its own. Closely correlated with the rise of cloud computing, this Next-generation Enterprise WAN (NEW, for short) architecture will help organizations move to cloud computing more quickly, but will be applicable even for those enterprises that don’t plan to use public cloud services for many years, if ever.
This column will cover this NEW architecture: why it will happen, what it will look like, the key technologies enabling it, the implications for various applications, including next-generation applications, the impact on enterprises’ relationships with their service providers, how it enables a smooth migration to leveraging cloud services, and much more.
This Next-generation Enterprise WAN architecture will happen precisely because private WANs need a revolution in price/performance to cost effectively support the next wave of applications and the move on the computing side of the house towards cloud computing. Also driving this trend is the fact that the plain-old-public-Internet is not reliable enough for the corporate WAN today, and that it won’t deliver the necessary reliability or performance predictability by itself that most enterprises demand from their WAN and applications.
One undeniable trend of the last several years has been to favor data center consolidation. This began because of the benefits on the computing and OpEx, rather than the network. Indeed, this trend put even more pressure on the WAN (and on the data center LAN, but that issue is beyond our scope). Server virtualization technology and WAN Optimization technology have further enabled and accelerated the data center consolidation trend. In fact, these are two of the technologies that are key to the NEW architecture.
Three additional technologies also play a critical role. Distributed replicated file service, such as DFS Replication from Microsoft, and similar file synchronization services (e.g. DropBox and Box.net in the public cloud world) have been around for some time, but have come into their own more recently as network bandwidth has become more available and more affordable. One might argue that this is more of a computing/storage/application technology than a network technology; there is some truth to that. Nevertheless, we include it here as one of the key enablers of the next-gen WAN.
Colocation (colo) facilities, and in particular carrier-neutral colo facilities, are our fourth component. While colos have been around for a while, and many IT folks are familiar with them for public-facing websites and perhaps know them as the location for many public cloud services, the nearly infinite amount of diverse, very inexpensive bandwidth available at colos will make them a critical component of this NEW architecture.
Our final technology is the newest one: WAN Virtualization. WAN Virtualization does for the WAN what RAID did for storage, enabling organizations to combine diverse sources of bandwidth and build WANs that have 20 to 100 times the bandwidth, with monthly WAN costs reduced by 40% to 80% or even more, and more reliability and performance predictability than any single-vendor MPLS network. WAN Virtualization is the catalyst of our NEW architecture.
With the combination of these technologies, Enterprise WANs will have far lower monthly telecom costs, far higher bandwidth, and will be more reliable. If that troika alone isn’t enough, this NEW architecture also delivers lower OpEx (people) costs, significantly better application performance and, just as importantly, better application performance predictability. It will enable next-generation applications, e.g. HD videoconferencing.
This architecture also enables benefits and changes beyond those to the WAN itself. It can enable further server consolidation, up to the elimination of all branch-based servers if desired. It will facilitate the centralization of network and IT complexity, e.g. for Internet access and remote site backup.
It will allow enterprises to leverage cloud computing – public, private or hybrid – in an incremental, secure and reliable way. Enterprise WAN managers can prepare and enable their WAN for the move to private or public cloud computing, at whatever pace the computing side of the organizations wants to go, without sacrificing the network reliability and network security they have today.
By doing all of these things it helps lower overall IT CapEx and OpEx, not just networking OpEx. Wide Area Network design is, for the first time in a long time, strategic.
One of the most beautiful points is that most of this next-generation network upgrade pays for itself out of the WAN OpEx budget. It also provides a long-term way to leverage Internet economics and Moore’s Law, giving enterprises a way to cost effectively scale their WANs and leverage new WAN technologies, even those that are consumer-oriented, as they appear. It gives enterprises leverage with their telecom service providers for the first time.
Just as cloud computing is making now an interesting and exciting time to be on the computing side of IT, the confluence of these five technologies - server virtualization, WAN Optimization, distributed/replicated/synchronized file services, colocation and WAN Virtualization – is making this an interesting and exciting time for the Enterprise WAN.

Tuesday, March 26, 2013

Juniper Breaking New Ground With the World's Smallest Supercore

More Details -
http://www.juniper.net/us/en/dm/ptx-3000/?utm_source=promo&utm_medium=home_page&utm_content=carousel&utm_campaign=ptx3000

Video
http://www.youtube.com/embed/HNXYUJsNz3A?rel=0&autohide=1&autoplay=1



Space and energy are two key challenges faced by Service Providers as they converge and optimize their business. The PTX3000 Packet Transport Router breaks new ground with the world’s smallest supercore, with capacity that is designed to rapidly scale over time up to 24 terabits per second. With this innovation, Service Providers can now install a converged supercore in virtually any space and energy constrained environment. Plus, it is designed to rapidly scale with minimal power consumption.

Meet the PTX 3000

Meet the latest innovation in the PTX line.



The PTX3000 breaks the mold for core routers with game-changing size, performance and efficiency, addressing practical barriers that Service Providers face in upgrading today’s networks. With the PTX3000, Juniper has redefined the upgrade so that one technician can hand-carry and install a PTX3000 in a matter of minutes versus hours.

When compared to competing core routing platforms, the PTX3000 router delivers:

  • The industry’s lowest power consumption – up to 1.2Tbps per Kilowatt
  • The industry’s most efficient system design - generating roughly 10,000 BTU of heat for a fully loaded system
  • Leading capacity-to-space ratio at over 0.533 gigabits per cubic inch
  • Wire rate forwarding for even the smallest packet sizes
  • The industry’s lowest latency at down to 5 micro-seconds
  • A precision feature set that ensures rapid time-to-deployment, and high reliability
 

F5 LTM VE – Configuring iRules (CLI)



The CLI is (in my opinion) the fastest and easiest way to configure a lot of these items if you are comfortable with it. Let's look at configuring iRules.

Create the HTTP_MOBILE_POOL
create ltm pool HTTP_MOBILE_POOL load-balancing-mode round-robin members add {192.168.1.41:81 192.168.1.42:81 192.168.1.43:81}

Create the iRule DETECT_DEVICE_TYPE
Enter this command from tmsh to enter the editor…
edit ltm rule DETECT_DEVICE_TYPE
Then you should see something like this…


[screenshot: the editor opens with an empty DETECT_DEVICE_TYPE rule skeleton]
Build your iRule within the curly brackets of the ‘modify’ statement. When you are done it should look like…
[screenshot: the completed iRule]

It’s VI so do your write quit to save. When you get back to tmsh it will prompt you again to make sure you want to save it. When you try to do so it will also hit you with any syntax errors you have and make you fix them before saving.

Modify the Virtual Server to specify the iRule
modify ltm virtual HTTP_TEST rules {DETECT_DEVICE_TYPE}

Add HTTP monitors to both pools (just for fun)
modify ltm pool HTTP_POOL monitor http
modify ltm pool HTTP_MOBILE_POOL monitor http


 

Monday, March 25, 2013

Where do Cisco's network security plans go from here?

 
 
Despite its leadership position in most enterprise security product areas, Cisco faces a number of technological and competitive challenges to stay out in front.
 
For example, the overarching security plan Cisco outlined two years ago known as SecureX remains very much a work in progress. The basic idea behind SecureX is to give customers a broad view of what computer and mobile device users are doing on the network.
 
The SecureX architecture has been called over-complicated and perhaps too dependent on having a Cisco-based infrastructure, but the basic idea is that by collecting real-time information about the individual's network usage and applications, device make, location and other variables, appropriate security policies can be established for network authorization.
 
Originally spearheading SecureX was Tom Gillis, a former vice president and general manager for the Cisco technology group who departed in 2011 and is now CEO of startup Bracket Computing. But Cisco says the importance of the SecureX initiative remains the same.
 
Support for SecureX has come first in the Cisco ASA CX Context-Aware Security Next-Generation Firewall. Dave Frampton, vice president of security at Cisco, says it's now on to "the next phase of SecureX," which will be the "routing and switching infrastructure," though he offers no specific time frame for completion.
 
Frampton emphasizes that "SecureX conveys our entire approach to security." He says about 3,000 Cisco customers have adopted SecureX security components, which include the older Cisco Identity Services Engine and TrustSec tagging methodology. He says tens of thousands more are indicating a high level of interest in SecureX.
 
Beyond SecureX, Cisco faces other challenges from analysts and enterprise IT security managers alike.
 
Gartner -- the consultancy whose thumbs-up or thumbs-down opinions on information technology are often a strong influence on enterprise managers and vendors -- has been critical of Cisco, especially in terms of its firewalls. For example, Gartner says that so-called next-generation firewalls (NGFW) that are application-aware rather than simply port-based are the direction that firewalls should be going. So while lavishing praise on other Cisco competitors -- Palo Alto Networks for its NGFW and Check Point Software Technologies for its array of firewalls and their management for complex environments, putting these two vendors in the Gartner firewall "leaders" category -- Gartner's report calls Cisco merely a "challenger."
 
While giving Cisco kudos for having a good support network and reputation analysis capabilities for its firewall customers, Gartner indicates that Cisco at this time does not seem to be displacing Palo Alto or Check Point on "vision or feature" and Cisco "does not effectively compete in the NGFW field that is visible to Gartner."
 
According to Cisco spokesman David Oro, "Cisco customers would say differently." He notes that the Cisco ASA CX firewall only shipped last July, and it would only be fair to give it time in the market. He says Cisco considers Gartner's research in this case "outdated," perhaps because it takes considerable time to put together this kind of lengthy Gartner report.
 
But Gartner says it sees Cisco winning most procurements through sales/channel execution or "aggressive discounting for large Cisco networks where firewall features are not highly weighted evaluation criteria (that is, as part of a solution sell in which security is one component)."
 
Gartner also notes that Gartner clients often find Cisco's security strategy, nomenclature and product descriptions "confusing." Gartner cites by way of example that Cisco uses the terms "context-aware" and "CX" rather than "application control" or "NGFW," and says Gartner clientele will out of confusion exclude Cisco in comparing its offerings to competitors' offerings.
 
Terms like "SecureX" and Cisco's marketing campaign "Internet of Everything," referring to how many devices are coming online, are confusing, says Erik Devine, information security manager at Riverside HealthCare, based in Kankakee, Ill. Devine says he has huge respect for Cisco as a network provider but simply "doesn't believe they're a strong security firm." He says that, like Juniper, Cisco should "stick to switching and routing."
 
Devine, who not only directs security but also networking decisions that include wireless and mobility for the healthcare organization, chose to migrate away from what was a Cisco-based network to an HP-based one, in part because licensing proved more attractive. In the course of that change, Riverside also moved away from Cisco-based ASA firewall modules. Instead, Riverside went with a variety of Fortinet firewall, SSL/VPN, encryption and messaging protection gateways that include wireless control for the core network.
 
Though he did look at Palo Alto and Cisco gear as part of the evaluation process, in the end Devine felt the Fortinet firewalls had sufficient application-level control for what the healthcare organization needed and were technically sound and cost-effective. In his own experience over the decades, Devine says he's found Cisco's licensing models to be overly complicated and expensive.
 
Palo Alto Networks, which Gartner considers the front-runner firewall maker technically in application-aware capability (though perhaps a bit pricey), says it sees Cisco as a worthy competitor.
 
"They are an impressive company. They have tremendous presence in the customer base," says Chris King, director of product marketing at Palo Alto, adding Cisco seems to have something akin to "absolute dominance" in the networking organization and remarkable sway with networking managers, who may have budgets for firewalling security, too. (Cisco doesn't disclose what portion of its firewall sales come from blades in switches and routers or as stand-alone firewall appliances.)
 
Because Cisco and Juniper alike stress that security should be part of the networking infrastructure and be integrated into it, the challenge for a firm such as Palo Alto is to get potential enterprise customers to understand the advantage of application-aware controls. King argues Cisco firewalls are simply stateful inspection with some application controls, and Palo Alto has to win acceptance by proving its NGFW functionality is worth it. According to its latest SEC filing, Palo Alto had 6,000 end-user customers at the start of last year and about 11,000 today.
 
Cisco faces a broad competitive field in IT security, according to IDC. Its main competitors in network security include Check Point, Juniper, Fortinet, McAfee, HP, Palo Alto Networks, IBM, Dell SonicWall and Sourcefire. In messaging security, it contends with Symantec, McAfee, Trend Micro, Websense, Barracuda, Sophos, EMC, Microsoft and F-Secure. In Web security, Websense, Trend Micro, McAfee, Barracuda, Sophos, Check Point, Symantec, F-Secure and IBM, among others, keep the pressure on Cisco. Cisco is not considered a major player in the endpoint security market, which is dominated by Symantec, Intel's McAfee unit, Trend Micro, Kaspersky Lab and others.
 
"Cisco is No. 1 in network security, No. 2 in Web security and No. 3 in messaging security," notes IDC analyst Charles Kolodgy. According to IDC, Cisco's IT security revenue bounced back to $1.834 billion by the end of 2012 after sinking the year before to $1.735 billion, and Cisco's fiscal statement last month indicates continuing modest growth in its sales of its IT security products and services, which include firewalls, intrusion-prevention systems, IronPort secure Web gateway and cloud-based ScanSafe service.
 
Cisco hasn't won every match. Cisco edged away from its own denial-of-service mitigation technology, Anomaly Guard and Anomaly Detector Modules, announcing "end of sale" back in 2010. Just this month, Cisco announced an alliance with Arbor Networks -- they had teamed together in the past -- that involves embedding Arbor anti-DDoS technology directly in Cisco routers.
 
And acquisitions remain a way to gain technologies that are seen as important for the future. For example, Cisco just acquired Prague-based Cognitive Security for its behavior-based threat analysis. Cisco's Frampton says this will play a role in identifying threats, especially targeted mobile devices.
Frampton does acknowledge that Cisco could be doing a better job in one area: uniting the security products it has acquired over the years so that they have a more unified policy and management platform. An integrated system, says Frampton, "will happen over the next several years."
 

Sunday, March 24, 2013

Network and Application Root Cause Analysis

 
 
 
Server delay

A few years ago, “Prove it’s not the network” was all that a Network Engineer had to do to get a problem off his back. If he could simply show that throughput was good end to end, latency was low, and there was no packet loss, he could throw a performance problem over the wall to the server and application people.

Today, it’s not quite that simple.
 
Network Engineers have access to tools which give them visibility down to the packet level of a problem. This level of visibility often requires them to work hand in hand with the application people all the way through to problem resolution. To find performance problems in applications, the network guys have had to take the TCP bull by the horns and simply take ownership of the transport layer.
 
What does that mean?
 
First, it means that “Prove it’s not the network” isn’t enough, as we have already mentioned. But it also means that analyzing TCP windows, TCP flags, and slow server responses has fallen on their shoulders. Since this is the case today, let’s look at a quick list of transport layer issues that the Network Engineer should watch for when analyzing a slow application.
 
1. Check out the TCP Handshake
 
No connection, slow connection, client to server roundtrip time, and TCP Options. These can all be analyzed by looking at the first three packets in the TCP conversation. It’s important when analyzing an application problem to capture this connection sequence. In the handshake, note the amount of time the server takes to respond to the SYN from the client, as well as the advertised window size on both sides. Retransmissions at this stage of the game are a real killer, as the client will typically wait for up to three full seconds to send a retransmission.
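The handshake timings described above can be pulled out of a trace programmatically. Here's a minimal Python sketch, assuming the capture has already been reduced to (timestamp, flags) tuples by whatever analyzer you use; the tuple format and the trace values are invented for illustration:

```python
# Toy sketch: derive handshake metrics from pre-parsed capture data.
# Assumes packets are (timestamp_seconds, tcp_flags_string) tuples in
# capture order -- a stand-in for fields exported from a real analyzer.

def handshake_metrics(packets):
    """Return (syn_to_synack, synack_to_ack) delays from the first
    three packets of a TCP conversation, or None if incomplete."""
    syn = synack = ack = None
    for ts, flags in packets:
        if flags == "SYN" and syn is None:
            syn = ts
        elif flags == "SYN/ACK" and syn is not None and synack is None:
            synack = ts
        elif flags == "ACK" and synack is not None:
            ack = ts
            break
    if ack is None:
        return None
    return (synack - syn, ack - synack)

trace = [(0.000, "SYN"), (0.045, "SYN/ACK"), (0.046, "ACK")]
print(handshake_metrics(trace))  # (server SYN/ACK delay, client ACK delay)
```

The first element approximates the server's turnaround on the SYN; the two together approximate one full client-server roundtrip.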
 
2. Compare server response time to connection setup time
 
The TCP handshake gave a good idea of the roundtrip time between client and server. Now we can use that time as a benchmark for measuring server response time. For example, if the connection setup time is 10 ms and the server takes 500 ms to respond to a client request, we can estimate that roughly 490 ms of that is server processing rather than network delay. This is not a huge deal when the number of client requests is low, but if the application is "chatty" (lots of client requests to the server) and we pay the server delay on each call, it turns into a much bigger performance problem.
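The estimate here is simple subtraction, and it compounds with chattiness. A quick sketch using the article's example figures (the function names are mine):

```python
# Back-of-the-envelope model from the text: handshake RTT approximates
# network time, so server processing ~= response time - RTT.
# All values in milliseconds.

def server_processing_ms(response_ms, handshake_rtt_ms):
    """Portion of the response time attributable to the server itself."""
    return response_ms - handshake_rtt_ms

def chatty_app_penalty_ms(requests, response_ms, handshake_rtt_ms):
    """Total time lost to server-side delay across many serial requests."""
    return requests * server_processing_ms(response_ms, handshake_rtt_ms)

print(server_processing_ms(500, 10))        # 490 ms per request
print(chatty_app_penalty_ms(200, 500, 10))  # 98000 ms over 200 serial calls
```

Two hundred serial calls at 490 ms of server delay each is 98 seconds, which is why a "chatty" application magnifies a per-call delay that looks small in isolation.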
 
3. Watch for TCP Resets
 
There are two ways a client can disconnect from the server: the TCP FIN or the TCP Reset. The FIN is a three- or four-packet mutual disconnect between client and server. It can be initiated by either side and gracefully frees the connection's resources. A TCP Reset, on the other hand, represents an immediate shutdown of the connection. If the server sends it, an inactivity timer may have expired or, worse, a problem in the application code was triggered. It's also possible that a device in the middle, such as a load balancer, sent the reset. These issues can be found in the packet trace by setting a filter for TCP Resets and closely analyzing the sender for the root cause.
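Filtering for resets ultimately comes down to testing one bit in the TCP header. A self-contained Python sketch, using fabricated 20-byte headers in place of a real capture:

```python
# Sketch: spot RSTs by testing the RST bit of the TCP flags byte,
# which sits at offset 13 of the TCP header.  Input is a list of raw
# TCP headers as bytes; in practice you'd pull these out of a pcap
# with your analyzer of choice.

TCP_FLAG_FIN = 0x01
TCP_FLAG_RST = 0x04

def is_reset(tcp_header: bytes) -> bool:
    """True if the RST flag is set in a raw TCP header."""
    return bool(tcp_header[13] & TCP_FLAG_RST)

# Minimal fabricated 20-byte headers: only the flags byte matters here.
fin_hdr = bytes(13) + bytes([0x11]) + bytes(6)   # FIN+ACK: graceful close
rst_hdr = bytes(13) + bytes([0x04]) + bytes(6)   # RST: abrupt close

print([is_reset(h) for h in (fin_hdr, rst_hdr)])  # [False, True]
```

Once the resets are isolated, the interesting question is who sent them: the server, the client, or something in the path.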
 
This list is not exhaustive, but it will get you started looking for the root cause of a performance problem. In future articles on LoveMyTool, we'll give tips and tricks for solving different problems with several analyzers, showing how these issues can be detected and resolved.
 

Juniper to unveil programmable core switch for software-defined networking



Juniper Networks is readying a new programmable core switch to address software-defined networking in campuses and data centers.

Sources say it is called the EX9200 and is based on the MX router. It comes in three configurations: 4-slot, 8-slot and 14-slot chassis, the same form factors as the company's successful MX 240, 480 and 960 routers for enterprises and service providers.

Juniper MX routers
Credit: Juniper
Sources say that the form factor of Juniper Networks' new programmable core switch is based on the company's MX family of routers, shown here.

An overview of it can be found here. Juniper confirmed it will be unveiling a "new, advanced switch" and offered an embargoed briefing, but Network World declined.

The EX9200 is based on custom silicon -- the Juniper One programmable ASIC. The MX 240, 480 and 960 are based on Juniper's I-Chip and Trio chipsets. The EX9200 will support 240G/slot, and 40xGigabit Ethernet, 32x10G, 4x40G and 2x100G interface line cards.

Programmability features, in addition to the Juniper One ASIC, include an XML- and Netconf-accessible automation toolkit, and Puppet, Python and OpenFlow interfaces. Puppet is an open-source operations management software system; Python is a programming language; and OpenFlow is a popular controller-to-switch protocol and API for SDNs.

SDNs are a way to make networks more programmable through software so that they can be reconfigured quickly and functionally extended more easily.

The EX9200 will also support plug-ins to orchestration systems from VMware and OpenStack.
The One ASIC will support VXLAN and NVGRE network virtualization, and MPLS-over-IP as programmable features. Juniper will program the ASIC itself initially but then plans to allow third-parties to eventually program in some features.

The EX9200 will support 1 million MAC addresses, 256k routes, 32,000 VLANs and 256k ACLs. It will support Layer 2/3 switching, MPLS, VPLS, L3VPN, point-to-multipoint, and 50 msec convergence using MPLS Fast Re-Route.

Sources say it will perform two-node Virtual Chassis, where two switches can be linked to form one logical switch, essentially creating a fabric. The EX8200 can be configured into a four-switch Virtual Chassis.

That the EX9200 is based on the MX router -- not the current EX platform -- and uses custom silicon before QFabric is indicative of the challenges Juniper is facing in enterprise and data center switching. The EX8200 has not had a major line card or switch fabric refresh since it was announced in 2008. And QFabric is based on merchant silicon -- Broadcom's Trident chipset -- after custom silicon reportedly failed to work and Juniper was pressed to get a much-hyped, much-anticipated product to market.

In last year's 500-employee, 5% staff reduction, speculation had it that Juniper cut loose many employees working on QFabric. Juniper said those reports were "inaccurate."

Around the same time, RK Anand, one of Juniper's early engineers and head of its data center switching group, departed, along with three other executive vice president-level officials. The data center switching group was also combined with the campus and branch business unit under Jonathan Davidson, whose background is largely in service provider routing.

The EX9200 also raises questions on the status and future of the EX8200 core campus switch, the QFabric Interconnect data center fabric switch and even the EX6200 "Simply Connected" core switch. Juniper has not disclosed or even hinted at any meaningful upgrade plans for the EX8200, such as 40/100G Ethernet or any next-generation switch fabric cards to support high density, high speed interfaces.

Juniper is also cagey on whether there will be a next-generation QFabric and if it will incorporate custom ASICs instead of merchant silicon. That QFabric sales have been slow -- and now that the EX9200 is imminent -- might be a signal that Juniper plans to go in another direction in fabric switching, specifically in the core, or spine.

Sources have said that QFabric has essentially been placed in maintenance mode: that significant development has ceased and only incremental features and fixes are currently planned, such as the possibility of fabric extenders. They also said that Juniper determined QFabric to be too complex and cost too much to build added features onto.

Some sources labeled the EX9200 as a "gap filler," or "MX paint job" until Juniper sorts out its campus and data center switching strategy. Juniper declined to comment on these issues.

Saturday, March 23, 2013

Fiber and 10 Gig Optics


So here’s a quick run down on fiber and 10 gig optics.

Multimode Fiber-Also referred to as ‘OM’ type cable
-Less expensive than single mode (‘OS’) fiber
-Several variants common today (OM1, OM2, OM3, OM4)
-OM1 and OM2 are orange in color and suitable for gigabit speeds
-OM3 and OM4 are aqua in color and suitable for 10 gigabit speeds
-OM1 has a core size of 62.5 microns
-OM2 has a core size of 50 microns
-OM3 has a core size of 50 microns and is laser optimized
-OM4 has a core size of 50 microns and is laser optimized
-Wider cores allow you to use less precise light sources such as LEDs
-Typically uses 850nm or 1300nm light sources (LEDs and Lasers)

Single mode Fiber-Also referred to as ‘OS’ type cable
-Allow for a single ‘mode’ (ray of light)
-Generally yellow in color
-A couple of variants common today (OS1, OS2)
-Core is between 8 to 10 microns
-Can support distances of several thousand kilometers
-Requires laser light sources that are in the 1270nm to 1625nm range

10 Gig Optic Standards
10GBaseT – Uses standard Ethernet cabling. Cat6a or 7
10GBaseSR – Uses multimode fiber pairs. Max of 300m with OM3
10GBaseLX4 – Uses 4 different 2.5Gbps lasers on different CWDM wavelengths. Only available in X2 or XENPAK form factors because of the size of fitting all 4 lasers in a single module. The original option for running 10 gig over older (non-laser-optimized) MMF fiber.
10GBaseLRM – Solves the same problem as LX4 but uses a single 1310nm laser with EDC (electronic dispersion compensation) rather than 4 individual lasers. Can reach as far as 220m on OM1, OM2, and OM3 fiber.
10GBaseLR – 10km over OS fiber. There is no minimum distance so this type of optic can be used for short runs as well.
10GBaseER – 40km over OS fiber. Links shorter than 20km require the signal to be attenuated so as not to damage the receiving optic.
10GBaseZR – 80km over OS fiber. Significant attenuation is required for runs much shorter than 80km. Not actually an IEEE standard.
10GBaseLW – Same as LR optics with the additional ability of being able to interface directly with OC192 transport.
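The list above condenses naturally into a lookup table. Here's a hedged Python sketch; the reaches are the commonly cited maximums from the text, the helper name is my own, and a real deployment should always verify against the vendor datasheet:

```python
# Quick reference distilled from the standards list above (copper and
# the WAN-PHY/LX4 variants omitted; reaches are common maximums).

OPTIC_REACH_M = {
    # name: (fiber type, max reach in meters)
    "10GBase-SR":  ("OM3 multimode", 300),
    "10GBase-LRM": ("OM1-OM3 multimode", 220),
    "10GBase-LR":  ("OS single mode", 10_000),
    "10GBase-ER":  ("OS single mode", 40_000),
    "10GBase-ZR":  ("OS single mode", 80_000),
}

def optics_for_run(distance_m):
    """Return the optics whose rated reach covers the given distance."""
    return sorted(name for name, (_, reach) in OPTIC_REACH_M.items()
                  if reach >= distance_m)

print(optics_for_run(250))  # LRM drops out: it tops out at 220 m
```

Remember the attenuation caveat from the list: an ER or ZR optic will cover a short run electrically, but the receiver needs the signal padded down first.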
10 Gig Interface Types
There are a variety of form factors available for 10 gig optics:
XENPAK
X2
XFP
SFP+

Thursday, March 21, 2013

6500 Architecture and evolution

 
Since I’ve recently become more interested in the actual switching and fabric architectures of Cisco devices, I decided to take a deeper look at the 6500 series switches. I’ve worked with them for years, but until recently I didn’t have a solid idea of how they actually switched packets. I had a general idea of how it worked and why DFCs were a good thing, but I wanted to know more. Based on my research, this is what I’ve come up with. I’d love to hear any feedback on the post since there is a chance that some of what I’ve read isn’t totally accurate. That being said, let’s dive right in…
 
Control vs Data Plane
All actions on a switch can be considered part of either the control plane or the data plane. The 6500 series switch is a hardware-based switch, which means it performs switching in hardware rather than software. The pieces of the switch that perform switching in hardware are considered the data plane. That being said, there still needs to be a software component that tells the data plane how to function. The parts of the switch that run in software are considered the control plane; these components make decisions and perform advanced functions which then tell the data plane how to behave. Cisco’s implementation of forwarding in hardware is called CEF (Cisco Express Forwarding).
 
Switch Design
The 6500 series switch is a modular switch that is comprised of a few main components. Let’s take a look at each briefly.
 
The Chassis
The 6500 series switch comes in many shapes and sizes. The most common (in my opinion) is the 6509. The last number indicates the number of slots on the chassis itself. There are also 3, 4, 6, and 13 slot chassis available. The chassis is what holds all of the other components and facilitates connecting them together. The modules plug into a silicon board called the backplane.
 
The Backplane
The backplane is the most crucial component of the chassis. It has all of the connectors on it that the other modules plug into. It has a few main components that are highlighted on the diagram below.
 
The diagram shows the backplane of a standard 9 slot chassis. Each slot has a connection to the crossbar switching fabric, the three buses (D,R,C) that compose the shared bus, and a power connection.
 
The switch fabric in the 6500 is referred to as a ‘crossbar’ fabric. It provides unique paths for each of the connected modules to send and receive data across the fabric. In initial implementations the SUP didn’t have an integrated switch fabric which required the use of a separate module referred to as the SFM (Switch Fabric Module). With the advent of the SUP720 series of SUPs the switch fabric is now integrated into the SUP itself. The cross bar switching fabric provides multiple non-blocking paths between different modules. The speed of the fabric is a function of both the chassis as well as the device providing the switch fabric.
Standard 6500 Chassis
Provides a total of 40Gbps per slot

Enhanced (e) 6500 Chassis
Provides a total of 80Gbps per slot

SFM with SUP32 Supervisor
Single 8 gig fabric connection
256Gbps switching fabric
18 fabric channels

SUP720 through SUP720-3B Supervisor
Single 20 gig fabric connection
720Gbps switching fabric
18 fabric channels

SUP720-3C Supervisor
Dual 20 gig fabric connections
720Gbps switching fabric
18 fabric channels

SUP2T Supervisor
Dual 40 gig fabric connections
2.08Tbps switching fabric
26 fabric channels
So as you can see, there are quite a few combinations you can use here. The bottom line is that with the newest SUP2T and the 6500e chassis, you could have a module with eight 10Gbps ports that isn’t oversubscribed.
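That bottom line is simple arithmetic: compare the sum of front-panel port speeds to the per-slot fabric bandwidth. A quick sketch (the function name is mine):

```python
# Oversubscription = total front-panel bandwidth / fabric bandwidth
# available to the slot.  Ratio <= 1.0 means line rate.

def oversubscription_ratio(ports, port_gbps, slot_gbps):
    """How many times the slot's fabric capacity the ports could demand."""
    return (ports * port_gbps) / slot_gbps

# Eight 10G ports behind an 80 Gbps/slot enhanced chassis with a SUP2T:
print(oversubscription_ratio(8, 10, 80))  # 1.0 -> line rate
# The same card behind a classic 40 Gbps slot:
print(oversubscription_ratio(8, 10, 40))  # 2.0 -> 2:1 oversubscribed
```

The same arithmetic explains why denser 10G cards in older chassis are necessarily oversubscribed.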
 
The other bus in the 6500 is referred to as a shared bus. In the initial 6500 implementation the fabric bus wasn’t used. Rather, all communication came across the shared bus. The shared bus is actually comprised of 3 distinct buses.
DBus (Data Bus) – Is the main bus in which all data is transmitted. The speed of the DBus is 32Gbps.
RBus (Results Bus) – Used by the supervisor to forward the result of the forwarding operation to each of the attached line cards. The speed of the RBus is 4Gbps.
CBus (Control Bus) – Relays information between line cards and the supervisor. This is also sometimes referred to as Ethernet Out of Band or EOB or EOBC (Ethernet Out of Band Controller). The speed of the CBus is 100Mbps half duplex.
The Supervisor (or, as we call them, SUPs)
The switch supervisor is the brains of the operation. In the initial implementation of the 6500, the SUP handled the processing of all packets and made all of the forwarding decisions. A supervisor is made up of three main components: the switch fabric, the MSFC (Multilayer Switch Feature Card), and the PFC (Policy Feature Card). The image below shows a top-down view of a SUP720 and the location of each component on the physical card.
 
MSFC – The Multi-Layer Switch Feature Card is considered to be the control plane of the switch. The MSFC runs processes that help build and maintain the layer 3 forwarding table (routing table), process ACLs, run routing protocols, and other services that are not run in hardware. The MSFC is actually comprised of two distinct pieces.
SP – The SP (Switch Processor) handles booting the switch. The SP copies the SP part of an IOS image from bootflash, boots itself, and then copies the RP part of the IOS image to the RP. Once the RP is booted, the SP hands control of the switch over to the RP. From that point on, the RP is what the administrator talks to in order to administer the switch. In most cases, the SP still handles layer 2 switch protocols such as ARP and STP.
RP – The RP (Route Processor) handles all layer 3 functions of the 6500, including running routing protocols and building the RIB from which the FIB is populated. Once the FIB is built in the RP, it can be downloaded to the data plane TCAM for hardware-based forwarding of packets. The RP runs in parallel with the SP, which provides the layer 2 functions of the switch.
PFC – The Policy Feature Card receives a copy of CEF’s FIB from the MSFC. Since the MSFC doesn’t actually forward any packets, it downloads the FIB into the hardware on the PFC. Basically, the PFC is used to accelerate layer 2 and layer 3 switching, and it learns how to do that from the MSFC. The PFC is considered part of the data plane of the switch.
 
Line Cards
The line cards of a 6500 series switch provide the port density to connect end user devices. Line cards come in different port densities and support many different interface types. Line cards connect to the SUP via the backplane.
 
The other pieces…
The 6500 also has a fan tray slot as well as two slots for redundant power supplies. I’m not going to cover these in detail since they don’t play into the switch architecture.
 
Switching Modes
Now that we’ve discussed the main components of the 6500, let’s talk about the different ways in which a 6500 switches packets. There are 5 main modes, and the mode used depends heavily on what type of hardware is present in the chassis.
 
Classic mode
In classic mode the attached modules make use of the shared bus in the chassis. When a switchport receives a packet, it is first locally queued on the card. The line card then requests permission from the SUP to send the packet onto the DBUS. If the SUP says yes, the packet is sent onto the DBUS and subsequently copied to the SUP as well as all other line cards. The SUP then performs a lookup on the PFC. The result of that lookup is sent along the RBUS to all of the cards. The card containing the destination port receives information on how to forward the packet, while all other cards receive word to terminate processing and delete the packet from their buffers. The speed of classic mode is 32Gbps half duplex since it’s a shared medium.
 
CEF256
In CEF256 mode each module has a connection to the shared 32Gbps bus as well as an 8Gbps connection to the switch fabric. In addition, each line card has a local 16Gbps bus (LCDBUS) on the card itself. When a switchport receives a packet, it is flooded on the LCDBUS and the fabric interface receives it. The fabric interface floods the packet header onto the DBUS. The PFC receives the header and makes the forwarding decision. The result is flooded on the RBUS back to the line card, and the fabric interface receives the forwarding information. At that point, the entire packet is sent across the 8Gbps fabric connection to the destination line card. The fabric interface on the egress line card floods the packet on the LCDBUS, and the egress switchport sends the packet on its way out of the switch.
 
dCEF256
In dCEF256 mode each line card has dual 8Gbps connections to the switch fabric and no connection to the shared bus. In this mode, the line card also has a DFC (Distributed Forwarding Card) which holds a local copy of the FIB as well as its own layer 2 adjacency table. Since the card doesn’t need to forward packets or packet headers to the SUP for processing, there is no need for a connection to the shared bus. Additionally, dCEF256 cards have dual 16Gbps local line card buses; the first LCDBUS handles half of the ports on the line card and the second handles the other half. Communication from a port on one LCDBUS to a port on the second LCDBUS goes through the switch fabric. Since the line card has all of the forwarding information it needs, it can forward packets directly across the fabric to the egress line card without talking to the SUP.
 
CEF720
Identical in operation to CEF256 but with some upgrades: the switch fabric is now integrated into the SUP rather than on an SFM, and the dual fabric connections from each line card are now 20Gbps apiece rather than 8Gbps.
 
dCEF720
Identical to dCEF256 with the same upgrades present in CEF720 (faster fabric connections and the switch fabric in the SUP).
 
Centralized vs Distributed Forwarding
I indicated earlier that early implementations of the switch utilized the SUP to make all switching and forwarding decisions. This is considered centralized switching, since the SUP provides all of the functionality required to forward a packet or frame. Let’s take a look at how a packet is forwarded using centralized forwarding.
Line cards by default (in most cases) come with a CFC, or Centralized Forwarding Card. The card has enough logic on it to know how to send frames and packets to the supervisor when it needs an answer. In addition, most cards can accept a DFC, or Distributed Forwarding Card. DFCs are the functional equivalent of the PFC located on the SUP and hold an entire copy of CEF’s FIB and adjacency tables. With a DFC in place, a line card can perform distributed forwarding, which takes the SUP out of the picture.
 
How centralized forwarding works…
1. Frame arrives at the port on a line card and is passed to the CFC on the local line card.
2. The bus interface on the CFC forwards the headers to the supervisor on the DBus. All other line cards connected to the DBus ignore the headers.
3. The PFC on the supervisor makes a forwarding decision based on the headers and floods the result on the RBus. All other line cards on the RBus ignore the result.
4. The CFC forwards the result, along with the packet, to the line card’s fabric interface. The fabric interface forwards the result and the packet onto the switch fabric toward their final destination.
5. The egress line card’s fabric ASIC receives the packet and forwards the data out towards the egress port.
 
How distributed forwarding works…
1. Frame arrives at the port on a line card and is passed to the fabric interface on the local line card.
2. The fabric interface sends just the headers to the DFC located on the local line card.
3. The DFC returns the forwarding decision of its lookup to the fabric interface.
4. The fabric interface transmits the packet onto the switch fabric and towards the egress line card
5. Egress line card receives the packet and forwards the packet on to the egress port.
So as you can see, distributed forwarding is much quicker than centralized forwarding just from a process perspective. In addition, it doesn’t require the use of the shared bus.
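As a purely illustrative toy model (the step names mirror the two lists above; nothing here measures real hardware), counting the steps and the shared-bus crossings makes the difference concrete:

```python
# Toy comparison of the two forwarding walkthroughs above: how many
# stages a frame passes through, and how many of them touch the
# shared bus.  Illustrative only -- not a performance measurement.

CENTRALIZED = ["ingress port", "CFC", "DBus", "PFC lookup on SUP",
               "RBus", "fabric interface", "switch fabric",
               "egress fabric ASIC", "egress port"]

DISTRIBUTED = ["ingress port", "fabric interface", "local DFC lookup",
               "switch fabric", "egress fabric ASIC", "egress port"]

def shared_bus_hops(path):
    """Count stages that cross the shared DBus/RBus."""
    return sum(1 for step in path if "Bus" in step)

print(len(CENTRALIZED), shared_bus_hops(CENTRALIZED))  # 9 stages, 2 bus hops
print(len(DISTRIBUTED), shared_bus_hops(DISTRIBUTED))  # 6 stages, 0 bus hops
```

Distributed forwarding both shortens the path and removes the contended shared-bus stages entirely, which matches the conclusion above.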
 
Conclusion
There are many pieces of the 6500 that I didn’t cover in this post, but hopefully it’s enough to get you started if you’re interested in knowing how these switches work. Hopefully I’ll have time soon to do a similar post on the Nexus 7000 series switch.
 

Tuesday, March 19, 2013

Cisco switches to weaker hashing scheme, passwords cracked wide open


Password cracking experts have reversed a secret cryptographic formula recently added to Cisco devices. Ironically, the encryption type 4 algorithm leaves users considerably more susceptible to password cracking than an older alternative, even though the new routine was intended to enhance protections already in place.

It turns out that Cisco's new method for converting passwords into one-way hashes uses a single iteration of the SHA256 function with no cryptographic salt. The revelation came as a shock to many security experts because the technique requires little time and computing resources. As a result, relatively inexpensive computers used by crackers can try a dizzying number of guesses when attempting to recover the corresponding plain-text password. For instance, a system outfitted with two AMD Radeon 6990 graphics cards that run a soon-to-be-released version of the Hashcat password cracking program can cycle through more than 2.8 billion candidate passwords each second.

By contrast, the type 5 algorithm the new scheme was intended to replace used 1,000 iterations of the MD5 hash function. The large number of repetitions forces cracking programs to work more slowly and makes the process more costly to attackers. Even more important, the older function added randomly generated cryptographic "salt" to each password, preventing crackers from tackling large numbers of hashes at once.

"In my eyes, for such an important company, this is a big fail," Jens Steube, the creator of ocl-Hashcat-plus said of the discovery he and beta tester Philipp Schmidt made last week. "Nowadays everyone in the security/crypto/hash scene knows that password hashes should be salted, at least. By not salting the hashes we can crack all the hashes at once with full speed."

Cisco officials acknowledged the password weakness in an advisory published Monday. The bulletin didn't specify the specific Cisco products that use the new algorithm except to say that they ran "Cisco IOS and Cisco IOS XE releases based on the Cisco IOS 15 code base." It warned that devices that support Type 4 passwords lose the capacity to create more secure Type 5 passwords. It also said "backward compatibility problems may arise when downgrading from a device running" the latest version.

The advisory said that Type 4 protection was designed to use the Password-Based Key Derivation Function version 2 standard to SHA256 hash passwords 1,000 times. It was also designed to append a random 80-bit salt to each password.

"Due to an implementation issue, the Type 4 password algorithm does not use PBKDF2 and does not use a salt, but instead performs a single iteration of SHA256 over the user-provided plaintext password," the Cisco advisory stated. "This approach causes a Type 4 password to be less resilient to brute-force attacks than a Type 5 password of equivalent complexity."

The weakness threatens anyone whose router configuration data may be exposed in an online breach. Rather than store passwords in clear text, the algorithm is intended to store passwords as a one-way hash that can only be reversed by guessing the plaintext that generated it. The risk is exacerbated by the growing practice of including configuration data in online forums. Steube found the hash "luSeObEBqS7m7Ux97dU4qPfW4iArF8KZI2sQnuwGcoU" posted here and had little trouble cracking it. (Ars isn't publishing the password in case it's still being used to secure the Cisco gear.)

While Steube and Schmidt reversed the Type 4 scheme, word of the weakness they uncovered recently leaked into other password cracking forums. An e-mail posted on Saturday to a group dedicated to the John the Ripper password cracker, for instance, noted that the secret to the Type 4 password scheme "is it's base64 SHA256 with character set './0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'." Armed with this knowledge, crackers have everything they need to crack hundreds of thousands, or even millions, of hashes in a matter of hours.
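To make the cost difference concrete, here's a hedged Python sketch: a single-pass unsalted SHA256 in the Type 4 style, next to a salted, iterated MD5 loop that captures only the shape of the Type 5 scheme (not the exact md5crypt construction). The custom-alphabet encoding assumes standard base64 bit ordering with the character set quoted above, which may not match Cisco's exact byte order:

```python
import base64
import hashlib

# Character set quoted in the John the Ripper mailing-list post above.
CISCO_B64 = b"./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
STD_B64   = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def type4_like(password: str) -> str:
    """One unsalted SHA256 pass, re-encoded with the quoted charset.
    (The bit ordering of the encoding is an assumption, not Cisco's spec.)"""
    digest = hashlib.sha256(password.encode()).digest()
    b64 = base64.b64encode(digest).rstrip(b"=")
    return b64.translate(bytes.maketrans(STD_B64, CISCO_B64)).decode()

def type5_like(password: str, salt: bytes, rounds: int = 1000) -> bytes:
    """Salted, iterated MD5 -- the *shape* of the older scheme, not the
    exact md5crypt construction IOS uses."""
    h = hashlib.md5(salt + password.encode()).digest()
    for _ in range(rounds - 1):
        h = hashlib.md5(h).digest()
    return h

print(len(type4_like("cisco")))  # 43 chars, same length as the leaked hash
# Each guess against the iterated scheme costs ~1000x the hash work of the
# single pass, and the salt stops one guess from testing many stolen
# hashes at once -- the two properties the Type 4 implementation dropped.
```

Because the single-pass hash is also unsalted, identical passwords always yield identical hashes, so a cracker can attack an entire dump of hashes with one pass through a wordlist.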

It's hard to fathom an implementation error of this magnitude being discovered only after the new hashing mechanism went live. The good news is that Cisco is openly disclosing the weakness early in its life cycle. Ars strongly recommends that users consider the pros and cons before upgrading their Cisco gear.
 

Friday, March 15, 2013

Huawei launches Global Network Evolution & Experience Centre


Global information and communications technology (ICT) services provider Huawei has announced the launch of its first Global Network Evolution & Experience Centre (GNEEC). The GNEEC was specifically designed and developed to manage the challenges that operators encounter during the network evolution process. It helps operators select the optimum network evolution offering that will meet the increasing demands of subscribers while minimising financial risks and generating a higher return on investment. The GNEEC covers an area of 1,500 square metres and is equipped with an array of multi-generation end-to-end (E2E) network systems comprising more than 200 sets of equipment supplied by eight mainstream vendors with specialties in all domains of the network infrastructure. Multi-vendor interoperability tests can be performed on over 30 kinds of network evolution plans, while independent testing and professional services areas can be configured to ensure the GNEEC can simulate and support all major evolution processes deployed in global markets.


 

Wednesday, March 13, 2013

Cisco-funded startup unveils breakthrough router, targets SDNs


A Cisco-funded router startup has unveiled its first product, which the company says implements breakthrough silicon-to-photonics circuitry for scaling service provider networks and enabling them for software-defined networking (SDN).

Compass-EOS this week announced the availability of the r10004, the first in a line of "core-grade" modular routers designed to increase network capacity and speed. The r10004 is powered by icPhotonics, a proprietary technology developed by Compass-EOS that is a chip-to-chip direct silicon-to-photonics implementation designed to provide terabit-per-second connectivity between line cards.

Compass-EOS' patent-protected icPhotonics technology integrates optical and electronic components onto a single microchip to achieve order-of-magnitude Internet speed increases, the company says. Each r10004 can serve as a modular router building block for the deployment of scale-out routing, SDN and network function virtualization, Compass-EOS says.

The r10004 is a 6RU, 800Gbps modular router optimized for core, peering and aggregation requirements. Line card options include 2x100G ports or 20x10G ports.
 
Each line card features two 1.3Tbps full duplex icPhotonics chips for 2.6Tbps full mesh connectivity between line cards. Such bandwidth can deliver high-level SLAs at high utilization rates, improved protection from DDoS attacks at maximum capacity, and congestion-free streaming of multicast video, Compass-EOS says.

The router provides centralized traffic policing vs. distributed policing in traditional routers, which helps protect router processing from being disabled in a DDoS attack, the company says.

"They're making the backplane go faster" through the icPhotonics technology, says Eve Griliches, vice president of optical networking at ACG Research. "Midplane designs limit router capacity due to I/O interconnects. (Compass-EOS) targets the I/O interconnect and makes them as fast as the speed of light."

Compass-EOS is earlier to market with optical I/O than any other router vendor, Griliches says. But Cisco may be working on a similar innovation through its recent acquisition of Lightwire.

"The technology is extremely interesting and could be used by any of the router vendors," she says. "If Cisco gets optical I/O they'd be in great shape."

For SDNs, Compass-EOS officials say the company has lined up partnerships with developers of SDN controllers to allow those controllers to interact with the r10004 via software commands. Compass-EOS will announce those partnerships at a later date.

The r10004 has been shipping since late 2012 and is available globally. One of the router's customers is a Tier 1 U.S. national cable and Internet Service Provider connecting content data centers in California and Texas using four 10G Ethernet link aggregation groups to/from each data center, and 100G Ethernet trunks into the core.

Compass-EOS would not disclose the identity of its customers, but cable provider Comcast is also an investor in the company and is deploying Cisco CRS core routers with 100G Ethernet links.

Cisco invested in the company in 2010 and earlier. Compass-EOS has raised $120 million since its founding in 2007, and has more than 150 employees between its facilities in Israel and Milpitas, Calif.
 

Tuesday, March 12, 2013

Is the Middle East The Next Big Market For Cloud Computing?


As cloud computing technology reaches a saturation point in North America, many in the space are beginning to look at other markets to supplement growth prospects. According to IDC, the Middle East could be the next major market to adopt cloud computing. A recent report by IDC expects total spending on cloud delivery in Saudi Arabia to increase 34.86 percent year on year in 2012, with long-term spending expanding at a compound annual growth rate of 49.7 percent between 2012 and 2016.

In a story published in the Saudi Gazette, Hamza Naqshbandi, senior research analyst for IT services with IDC Saudi Arabia said, “Organizations across the kingdom have traditionally preferred to manage their IT operations internally, however, there has been growing interest in outsourcing models, with organizations increasingly using hosting and managed services. This growing adoption of outsourcing services is seen as a first step toward moving to a cloud-based model, as companies become more comfortable with the concept of remote services delivery.”

Another great example of the opportunities found in the region is the January announcement from Virtustream that it has partnered with Etihad Etisalat (Mobily), the largest telecom operator in the Middle East and Africa, to provide cloud services. The joint partnership will provide enterprise cloud services in the Kingdom of Saudi Arabia (KSA) as well as other parts of the region with a suite of cloud-based tools and services. The public and hybrid cloud services will be jointly provided by Mobily and Virtustream and will offer world-class cloud services to enterprises and small-to-medium businesses (SMBs).

“The Mobily/Virtustream cloud platform provides enterprise-class cloud services in the Middle East,” said Dr. Marwan al Ahmadi, chief business officer at Mobily. “We look forward to working with Virtustream to provide our customers with best-in-class enterprise cloud solutions that are the first of their kind in the KSA market.” Virtustream isn’t alone in the market.

Global data storage and network specialists like Cisco Systems, Hewlett-Packard, EMC, Germany's Siemens and Japan's NEC have also entered the market to try to take advantage of the opportunities found in the region. China's Huawei is making major moves as well. It recently launched a mobile cloud center in Dubai, which will provide a variety of cloud and data center services to companies in the region.

Even with significant foreign investment in the market, cloud computing in the Middle East is still in its early stages. If you know of any interesting cloud-related companies or projects in the region, please post in the comments area below.
 

Thursday, March 7, 2013

Network Functions Virtualisation – Introductory White Paper


The key objective for this white paper is to outline the benefits, enablers and challenges for Network Functions Virtualisation (as distinct from Cloud/SDN) and the rationale for encouraging an international collaboration to accelerate development and deployment of interoperable solutions based on high volume industry standard servers.

This document covers Introduction, Benefits, Enablers, Challenges and Calls for Actions.

Download here -

http://portal.etsi.org/NFV/NFV_White_Paper.pdf


 

Monday, March 4, 2013

Cisco virtual router targets the cloud


The Cisco CSR 1000V router is designed for enterprise network managers who want to have a little piece of their Cisco infrastructure in the cloud.

Whether that's for firewalling, VPN or dynamic routing, the CSR 1000V supports all major technologies in IOS. The idea is that a virtual router gives the network manager the flexibility to enforce policy, connect or provide high availability using familiar Cisco tools and technologies.

We tested the CSR 1000V, a full-featured IOS XE router running v3.8S of XE in a VMware-compatible virtual machine. Does it work? Yes, in fact, it works just fine. It works great, actually.

In our functional testing, the CSR 1000V met all the requirements we'd expect for this type of environment. We tried bringing up VPN connections, defining firewall and NAT rules, and running both OSPF and BGP routing protocols. We set up two CSR 1000V virtual machines on two different hosts, and used HSRP to fail over between them. We exercised both IPv4 and IPv6, and we tested management with Cisco's ever-popular command line, as well as SNMP monitoring and remote SYSLOG logging.

With thousands of pages of IOS documentation, we may not have scratched the surface of full functional testing, but certainly the key features that we think most enterprise network managers will want are all in place and working just fine.

The version of the CSR 1000V we tested is only supported on VMware's ESXi 5 infrastructure. We looked at an early release version; Cisco told us that the virtual appliance should be available to all customers around March. At that time, IOS XE will be upgraded to v3.9. Cisco is also predicting that it will support Amazon Web Services (based on Citrix's XenServer hypervisor technology) and Red Hat KVM with the v3.10 release of the CSR 1000V in July. Microsoft's Hyper-V isn't on Cisco's public road map, at least not yet.

Running the CSR 1000V is not for the faint of heart. We started out with the idea that we'd put it on our test VMware farm, which was running older servers with vSphere v4. In years of testing, that's never given anyone a problem until Cisco came along.

The CSR 1000V not only requires vSphere v5, but also has very strict hardware requirements, including a minimum of four physical (not virtual, but physical) cores, all in the same socket, dedicated to the CSR 1000V without any sharing, 4GB of memory, and Intel Nehalem or newer CPUs. Don't follow the specifications, and you've got a crashing CSR 1000V, which isn't much fun.

Cisco told us that it is considering allowing future versions of the CSR 1000V to share CPUs with other virtual machines, but the version we tested doesn't have that option: four CPU cores had to be exclusively dedicated to the CSR 1000V.

Compared to alternative software router technologies, the CSR 1000V is fairly heavyweight. The requirement for so many physical cores and the newer CPUs may limit the options for deploying CSR 1000V in clouds running older hardware or ones without quad-core CPUs.

Performance

We looked at performance on the CSR 1000V and found that it meets its requirements, but they're pretty modest. Cisco technical staff told us that they've gotten up to 1Gbps out of the CSR 1000V, but the official data sheet cuts that number considerably, to 50Mbps. Cisco told us to expect higher throughput (in the 1Gbps range, depending on hardware, of course) in future versions later in 2013.

Cisco may be shooting low here for some reason, but we think that network managers might be disappointed with this level of performance in cloud deployments. After all, one of the reasons for using cloud service providers is to get extra bandwidth at lower cost. The performance we saw would be fine for typical management and off-site database applications, but you wouldn't want to put the CSR 1000V in front of an Internet-facing Web server unless the bandwidth requirements were very low.

With the CSR 1000V in pure routing mode and sending and receiving packets from external devices outside of the VMware environment, we were able to push about 48Mbps through it before it started dropping packets, just about hitting that 50Mbps number. While the overall system CPU wasn't really breaking a sweat at that level, one of the four cores was flat-lined at nearly 100%. Either Cisco is wasting cycles as part of its bandwidth cap, or the virtual appliance was topped out. We confirmed this suspicion by turning on firewall and NAT features, and got the same within-data-sheet performance, although with a higher CPU load spread across more cores.

We validated that the VMware hardware we were using (a Dell R610 server) was not the problem by loading up the open source Vyatta router on the same hardware and pushing a hefty 500Mbps (input) through the hardware, using only a single CPU core and a single external Gigabit Ethernet port. We also tested the CSR 1000V and the Vyatta router on Cisco's own UCS Express hardware, with the same results.

With Cisco pushing the AppNav-XE technology into the CSR 1000V, the low throughput may inhibit adoption in Internet-facing applications.

AppNav is Cisco coming backward into the load balancer world (no one wants to compete head-on with F5, not even Cisco) with a coordinated technology that handles distribution of traffic from the WAN into application servers, such as instant messaging, file sharing, Web traffic and Microsoft Exchange.

AppNav is officially "complementary" to Cisco's older WCCP (Web Cache Communication Protocol), the much-maligned load distribution and redirection technology Cisco took on when it purchased ArrowPoint Communications in 2000. But many network managers will discover that with AppNav they can do away with ugly and complicated WCCP deployments.

We successfully built a small AppNav deployment, putting the CSR 1000V in front of two other virtual machines running Web services, and found it easy to put together with ample documentation. But we didn't stress AppNav's configuration capabilities or try to scale up because of the 50Mbps limit on the CSR 1000V.

Licensing

For years, IOS users have gotten away with simple and non-intrusive licensing models from Cisco. The CSR 1000V tries to keep a fairly lightweight licensing model, but there's no question that Cisco is not giving this virtual hardware away. Starting with the March release, you'll be able to license the appliance on a term basis. This means that you have to buy a one-, three- or five-year license, and when that license expires, the CSR 1000V throttles traffic down to 2.5Mbps.

To lock down the CSR 1000V virtual machine as much as possible, Cisco has built a licensing scheme that requires a different license for each virtual machine. Although you can vMotion the CSR 1000V all over your network without requiring a new license, you can't just clone a legal CSR 1000V to get a second CSR 1000V appliance -- you must pay for and apply a different license to the cloned VM.

Network managers looking for high availability can either use the built-in high-availability features of VMware to resurrect a single CSR 1000V, if the host hardware fails, or can use Cisco's own HSRP to keep two (or more) legally licensed CSR 1000Vs alive all the time. Or both.

Overall, Cisco has come to a reasonable approach to keep its intellectual property intact. And network managers intent on using the CSR 1000V for their CCIE study labs shouldn't fear, as the CSR 1000V has a 60-day evaluation mode that doesn't require a license.