Network Enhancers - "Delivering Beyond Boundaries" Headline Animator

Saturday, June 30, 2012

Advanced MPLS Interview Questions - Part 2

1. What group is responsible for creating MPLS standards?

The IETF's MPLS Working Group is charged with establishing core MPLS standards. Other IETF working groups are charged with developing standards covering areas such as Generalized MPLS, MPLS network management, Layer 2 encapsulation, L2 & L3 VPN services, and MPLS Traffic Engineering.

In addition, industry groups such as the Optical Internetworking Forum (OIF), The Optical Ethernet Forum, and the MFA Forum (MPLS/Frame/ATM) are working on other MPLS standards not related to the areas of focus of the IETF.

2. What is the MFA Forum?

The MFA is the union of the MPLS Forum, Frame Relay Forum, and ATM Forum. The MFA is an industry consortium dedicated to accelerating the adoption of Multiprotocol Label Switching (MPLS) and its associated technologies.

3. What MPLS related mailing lists are there and what are they used for?

The following is a list of current MPLS-related mailing lists:
The IETF's MPLS Working Group mailing list, which can be joined at https://www1.ietf.org/mailman/listinfo/mpls. This list is for discussion of MPLS standards development. Note that several of the other IETF working groups also host mailing lists for discussion of MPLS standards for specific applications.

The MPLS-OPS mailing list, which can be joined by visiting http://www.mplsrc.com/mplsops.shtml. This list is for the discussion of issues related to the design, deployment and management of MPLS-based networks.

LINUXMPLS - A Yahoo-based group and mailing list for the discussion of MPLS implementations for LINUX can be accessed at:

http://groups.yahoo.com/group/linuxmpls

4. What is MPLS?

MPLS stands for "Multiprotocol Label Switching". In an MPLS network, incoming packets are assigned a "label" by a "label edge router (LER)". Packets are forwarded along a "label switch path (LSP)" where each "label switch router (LSR)" makes forwarding decisions based solely on the contents of the label. At each hop, the LSR strips off the existing label and applies a new label which tells the next hop how to forward the packet.

Label Switch Paths (LSPs) are established by network operators for a variety of purposes, such as to guarantee a certain level of performance, to route around network congestion, or to create IP tunnels for network-based virtual private networks. In many ways, LSPs are no different than circuit-switched paths in ATM or Frame Relay networks, except that they are not dependent on a particular Layer 2 technology.

An LSP can be established that crosses multiple Layer 2 transports such as ATM, Frame Relay or Ethernet. Thus, one of the true promises of MPLS is the ability to create end-to-end circuits, with specific performance characteristics, across any type of transport medium, eliminating the need for overlay networks or Layer 2 only control mechanisms.

To truly understand "What is MPLS?", RFC 3031, "Multiprotocol Label Switching Architecture", is required reading.
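To make the per-hop label swap described above concrete, here is a minimal Python sketch of an LSR's label forwarding table. The label values, neighbour names and "pop" behaviour are invented for illustration and are not drawn from any particular implementation.

```python
# Minimal sketch of per-hop label swapping at an LSR.
# Label values, next hops and the "pop" action are invented examples.

# Label Forwarding Information Base: in_label -> (out_label, next_hop)
LFIB = {
    17: (24, "LSR-B"),    # swap label 17 for 24, forward towards LSR-B
    24: (38, "LSR-C"),
    38: ("pop", "CE-1"),  # egress behaviour: remove the label entirely
}

def forward(packet):
    """Swap the top label according to the LFIB and return the next hop."""
    out_label, next_hop = LFIB[packet["label"]]
    if out_label == "pop":
        packet.pop("label")          # egress: deliver the native IP packet
    else:
        packet["label"] = out_label  # swap: strip old label, apply new one
    return next_hop

pkt = {"label": 17, "payload": "IP packet"}
print(forward(pkt), pkt)   # -> LSR-B {'label': 24, 'payload': 'IP packet'}
```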


5. How did MPLS evolve?

MPLS evolved from numerous prior technologies including Cisco's "Tag Switching", IBM's "ARIS", and Toshiba's "Cell-Switched Router". More information on each of these technologies can be found at http://www.watersprings.org/links/mlr/.

The IETF's MPLS Working Group was formed in 1997.

6. What problems does MPLS solve?

The initial goal of label based switching was to bring the speed of Layer 2 switching to Layer 3. Label based switching methods allow routers to make forwarding decisions based on the contents of a simple label, rather than by performing a complex route lookup based on destination IP address. This initial justification for technologies such as MPLS is no longer perceived as the main benefit, since Layer 3 switches (ASIC-based routers) are able to perform route lookups at sufficient speeds to support most interface types.

However, MPLS brings many other benefits to IP-based networks, including:

Traffic Engineering - the ability to set the path traffic will take through the network, and the ability to set performance characteristics for a class of traffic

VPNs - using MPLS, service providers can create IP tunnels throughout their network, without the need for encryption or end-user applications

Layer 2 Transport - New standards being defined by the IETF's PWE3 and PPVPN working groups allow service providers to carry Layer 2 services including Ethernet, Frame Relay and ATM over an IP/MPLS core

Elimination of Multiple Layers - Typically, most carrier networks employ an overlay model where SONET/SDH is deployed at Layer 1, ATM is used at Layer 2 and IP is used at Layer 3. Using MPLS, carriers can migrate many of the functions of the SONET/SDH and ATM control plane to Layer 3, thereby simplifying network management and reducing network complexity. Eventually, carrier networks may be able to migrate away from SONET/SDH and ATM altogether, which means eliminating ATM's inherent "cell tax" in carrying IP traffic.

7. What is the status of the MPLS standard?

Most MPLS standards are currently in the "Internet Draft" phase, though several have now moved into the RFC-STD phase. See "MPLS Standards" for a complete listing of current IDs and RFCs. For more information on the current status of various Internet Drafts, see the IETF's MPLS Working Group home page at http://www.ietf.org/html.charters/mpls-charter.html

There's no such thing as a single MPLS "standard". Instead, there is a set of RFCs and IDs that together allow the building of an MPLS system. For example, a typical IP router spec sheet will list about 20 RFCs with which the router complies. If you go to the IETF web site (http://www.ietf.org), click on "I-D Keyword Search", enter "MPLS" as your search term, and crank up the number of items to be returned (or visit http://www.mplsrc.com/standards.shtml), you'll find over 100 drafts currently stored. These drafts have a lifetime of 6 months.

Some of these drafts have been adopted by the IETF WG for MPLS. The filename for these drafts is prefixed by "draft-ietf-". Some of these drafts are now on the IETF Standards Track. This is indicated in the first few lines of the document with the term "Category: Standards Track". You can read up on this process in RFC 2600.

MPLS Components

8. What is a Label?

Section 3.1 of RFC 3031: "Multiprotocol Label Switching Architecture" defines a label as follows "A label is a short, fixed length, locally significant identifier which is used to identify a FEC. The label which is put on a particular packet represents the "Forwarding Equivalence Class" to which that packet is assigned."

The MPLS Label is formatted as follows:

| Label (20 bits) | CoS (3 bits) | Stack (1 bit) | TTL (8 bits) |

The 32-bit MPLS label is located after the Layer 2 header and before the IP header. The MPLS label contains the following fields:

The label field (20-bits) carries the actual value of the MPLS label.

The CoS field (3-bits) can affect the queuing and discard algorithms applied to the packet as it is transmitted through the network.

The Stack (S) field (1-bit) supports a hierarchical label stack.

The TTL (time-to-live) field (8-bits) provides conventional IP TTL functionality. The complete 32-bit label entry is also called a "shim" header, because it is inserted between the Layer 2 and Layer 3 headers.
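As a rough illustration of the shim header layout above, the four fields can be packed into and unpacked from the 32-bit label entry with simple bit operations. This is an illustrative sketch only; the example field values are arbitrary.

```python
import struct

def pack_mpls_label(label, cos, s, ttl):
    """Pack the four shim-header fields into a 32-bit MPLS label entry."""
    word = (label << 12) | (cos << 9) | (s << 8) | ttl
    return struct.pack("!I", word)

def unpack_mpls_label(data):
    """Return (label, cos, s, ttl) from a 4-byte MPLS label entry."""
    (word,) = struct.unpack("!I", data)
    return (word >> 12) & 0xFFFFF, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF

entry = pack_mpls_label(label=16, cos=5, s=1, ttl=64)
print(unpack_mpls_label(entry))   # -> (16, 5, 1, 64)
```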

9. What is a Label Switch Path?

An LSP is a specific traffic path through an MPLS network. An LSP is provisioned using label distribution and signaling protocols such as RSVP-TE or CR-LDP. Either of these protocols will establish a path through an MPLS network and will reserve the necessary resources to meet pre-defined service requirements for the data path.

LSPs must be contrasted with traffic trunks. From RFC 2702: "Requirements for Traffic Engineering Over MPLS," "A traffic trunk is an aggregation of traffic flows of the same class which are placed inside a LSP. It is important, however, to emphasize that there is a fundamental distinction between a traffic trunk and the path, and indeed the LSP, through which it traverses. In practice, the terms LSP and traffic trunk are often used synonymously. The path through which a trunk traverses can be changed. In this respect, traffic trunks are similar to virtual circuits in ATM and Frame Relay networks."

10. What is a Label Distribution Protocol?

A label distribution protocol (LDP) is a specification which lets a label switch router (LSR) distribute labels to its LDP peers. When an LSR assigns a label to a forwarding equivalence class (FEC), it needs to let its relevant peers know of this label and its meaning, and LDP is used for this purpose. Since a set of labels from the ingress LSR to the egress LSR in an MPLS domain defines a Label Switched Path (LSP), and since labels are mappings of network layer routing onto data link layer switched paths, LDP helps establish an LSP by using a set of procedures to distribute the labels among the LSR peers.

Label Switching Routers (LSRs) use labels to forward traffic. A fundamental step in label switching is that LSRs agree on what labels they should use to forward traffic. They come to this common understanding by using a label distribution protocol.

Label Distribution Protocol is a major part of MPLS. Similar mechanisms for label exchange existed in vendor implementations, such as Ipsilon's Flow Management Protocol (IFMP), IBM's Aggregate Route-based IP Switching (ARIS), and Cisco's Tag Distribution Protocol. LDP and labels are the foundation of Label Switching.

LDP has the following basic characteristics:

It provides an LSR discovery mechanism to enable LSR peers to find each other and establish communication
It defines four classes of messages: DISCOVERY, ADJACENCY, LABEL ADVERTISEMENT, and NOTIFICATION messages
It runs over TCP to provide reliable delivery of messages (with the exception of DISCOVERY messages)
LDP label distribution and assignment may be performed in several different modes (a toy sketch of the first pair of modes follows this list):


Unsolicited downstream versus downstream-on-demand label assignment
Ordered versus independent LSP control
Liberal versus conservative label retention
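The following toy Python sketch illustrates the first pair of modes (unsolicited downstream versus downstream-on-demand). It is only a conceptual model with invented class and method names; real LDP exchanges the message types listed above using UDP discovery and a TCP session between peers.

```python
import itertools

class LSR:
    """Toy LSR that binds labels to FECs and advertises them to LDP peers.
    Hypothetical model for illustration only, not a real LDP implementation."""

    _labels = itertools.count(16)          # label values 0-15 are reserved

    def __init__(self, name):
        self.name = name
        self.bindings = {}                 # FEC (prefix) -> locally assigned label
        self.peers = []

    def bind(self, fec):
        if fec not in self.bindings:
            self.bindings[fec] = next(LSR._labels)
        return self.bindings[fec]

    def advertise_unsolicited(self, fec):
        """Downstream unsolicited: push the binding to every peer."""
        label = self.bind(fec)
        for peer in self.peers:
            peer.receive_mapping(self.name, fec, label)

    def request_mapping(self, peer, fec):
        """Downstream on demand: ask a specific peer for a binding."""
        self.receive_mapping(peer.name, fec, peer.bind(fec))

    def receive_mapping(self, peer_name, fec, label):
        print(f"{self.name}: learned label {label} for {fec} from {peer_name}")

a, b = LSR("LSR-A"), LSR("LSR-B")
a.peers.append(b)
a.advertise_unsolicited("10.1.0.0/16")    # unsolicited downstream assignment
b.request_mapping(a, "10.2.0.0/16")       # downstream-on-demand assignment
```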

11. What's the difference between CR-LDP and RSVP-TE?

CR-LDP and RSVP-TE are both signaling mechanisms used to support Traffic Engineering across an MPLS backbone. RSVP is a QoS signaling protocol that is an IETF standard and has existed for quite some time. RSVP-TE extends RSVP to support label distribution and explicit routing, while CR-LDP proposed to extend LDP (designed for hop-by-hop label distribution) to support QoS signaling and explicit routing. MPLS Traffic Engineering tunnels are not limited to IP route selection procedures and thus can spread network traffic more uniformly across the backbone, taking advantage of all available links. A signaling protocol is required to set up these explicit MPLS routes or tunnels.


There are many similarities between CR-LDP and RSVP-TE for constraint-based routing. The Explicit Route Objects that are used are extremely similar. Both protocols use ordered Label Switched Path (LSP) setup procedures. Both protocols include some QoS information in the signaling messages to enable resource allocation and LSP establishment to take place automatically.

At the present time CR-LDP development has ended, and RSVP-TE has emerged as the "winner" among traffic engineering signaling protocols.


12. What is a "Forwarding Equivalency Class"?


Forwarding Equivalency Class (FEC) is a set of packets which will be forwarded in the same manner (e.g., over the same path with the same forwarding treatment). Typically, packets belonging to the same FEC will follow the same path in the MPLS domain. When assigning a packet to an FEC, the ingress LSR may look at the IP header and also at other information, such as the interface on which the packet arrived. The FEC to which a packet is assigned is identified by a label.

One example of an FEC is a set of unicast packets whose network layer destination address matches a particular IP address prefix. A set of multicast packets with the same source and destination network layer addresses is another example of an FEC. Yet another example is a set of unicast packets whose destination addresses match a particular IP address prefix and whose Type of Service bits are the same.
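As a rough illustration of the first example above, the sketch below classifies packets into FECs by longest-prefix match on the destination address and returns the label bound to that FEC at the ingress LER. The prefixes and label values are invented for illustration.

```python
import ipaddress

# Toy FEC table: destination prefix -> label bound at the ingress LER.
# Prefixes and labels are invented examples.
FEC_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): 100,
    ipaddress.ip_network("10.1.0.0/16"): 200,   # more specific prefix, its own FEC
    ipaddress.ip_network("192.168.0.0/16"): 300,
}

def classify(dst_ip):
    """Assign a packet to a FEC by longest-prefix match on its destination."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in FEC_TABLE if dst in net]
    if not matches:
        return None                              # no FEC: forward as plain IP
    best = max(matches, key=lambda net: net.prefixlen)
    return FEC_TABLE[best]

print(classify("10.1.2.3"))     # -> 200 (matches the /16, not the /8)
print(classify("192.168.5.9"))  # -> 300
```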

13. How are Label Switch Paths built?


A Label Switch Path (LSP) is the set of LSRs that packets belonging to a certain FEC travel through in order to reach their destination. Since MPLS allows a hierarchy of labels, known as a label stack, it is possible to have different LSPs at different label levels for a packet to reach its destination. More formally, an LSP for a packet with a label of level m is the set of LSRs that the packet has to travel through at level m to reach its destination. Please refer to Section 3.15 of RFC 3031, "Multiprotocol Label Switching Architecture", for a very formal and complete definition.



What is the relationship between MPLS and the Interior Routing Protocol?

Interior Gateway Protocols (IGPs), such as OSPF and IS-IS, are used to define reachability and the binding/mapping between a FEC and a next-hop address. MPLS learns routing information from the IGP (e.g., OSPF, IS-IS). A link-state Interior Gateway Protocol is typically already running on large corporate or service provider networks. There are no changes required to IGP routing protocols to support MPLS, MPLS-TE, MPLS QoS, or MPLS-BGP VPNs.

14. What other protocols does MPLS support besides IP?


By definition, Multiprotocol Label Switching supports multiple protocols. At the Network Layer MPLS supports IPv6, IPv4, IPX and AppleTalk. At the Link Layer MPLS supports Ethernet, Token Ring, FDDI, ATM, Frame Relay, and Point-to-Point Links. It can essentially work with network layer protocols other than IP and can be layered on top of any link layer protocol. In addition, development efforts have allowed MPLS not only to work over any data link layer protocol, but also to natively carry a data link layer protocol over IP, thus enabling services such as Ethernet over MPLS.

MPLS and ATM


15. What are the differences between MPLS and ATM?

MPLS brings the traffic engineering capabilities of ATM to packet-based networks. It works by tagging IP packets with "labels" that specify a route and priority. It combines the scalability and flexibility of routing with the performance and traffic management of Layer 2 switching. It can run over nearly any transport medium (ATM, FR, POS, Ethernet...) instead of being tied to a specific Layer 2 encapsulation. As it uses IP for its addressing, it uses common routing/signaling protocols (OSPF, IS-IS, RSVP...).


16. Does MPLS replace ATM?


MPLS was not designed to replace ATM, but the practical reality of the dominance of IP-based protocols, coupled with MPLS's inherent flexibility, has led many service providers to migrate their ATM networks to ones based on MPLS.


MPLS can co-exist with ATM switches and eliminate complexity by mapping IP addressing and routing information directly into ATM switching tables. The MPLS label-swapping paradigm is the same mechanism that ATM switches use to forward ATM cells. For an ATM-LSR the label-swapping function is performed by the ATM forwarding component. Label information is carried in the ATM header, specifically the VCI and VPI fields. MPLS provides the control component for IP on both the ATM switches and routers. For ATM switches, PNNI, ATM ARP Server, and NHRP Server are replaced with MPLS for IP services. The ATM forwarding plane (i.e., 53-byte cells) is preserved. PNNI may still be used on ATM switches to provide ATM services for non-MPLS ports. Therefore, an IP+ATM switch delivers the best of both worlds: ATM for fast switching and IP protocols for IP services, all in a single switch.


17. What is "Ships in the night"?


Some vendors support running MPLS and ATM in the same device. Generally speaking, these two processes run separately. A change in an MPLS path has no bearing on ATM virtual circuits. This practice is commonly referred to as "ships in the night" since the two processes act alone. However, in some cases there is some interaction between the two processes. For example, some vendors support a mechanism whereby a reservation of resources by a label switch path is detected by the ATM control mechanism to avoid resource conflicts.

"Ships in the night" is used as a transitioning mechanism as networks migrate their ATM control planes to MPLS. Networks initially preserve ATM for carrying time sensitive data traffic such as voice and video, and for connecting to non-MPLS enabled nodes, while concurrently running MPLS to carry data. Over time there will no longer be a need for separate ATM flows and therefore networks will only carry MPLS label-based traffic.


MPLS Traffic Engineering


18. What does MPLS traffic engineering accomplish?

Traffic engineering refers to the process of selecting the paths chosen by data traffic in order to balance the traffic load on the various links, routers, and switches in the network. Traffic engineering is most important in networks where multiple parallel or alternate paths are available.


A major goal of Internet Traffic Engineering is to facilitate efficient and reliable network operations while simultaneously optimizing network resource utilization and traffic performance.

The goal of TE is to compute a path from one given node to another (source routing), such that the path does not violate the constraints (e.g., bandwidth or administrative requirements) and is optimal with respect to some scalar metric. Once the path is computed, TE (a.k.a. constraint-based routing) is responsible for establishing and maintaining forwarding state along such a path.

19. What are the components of MPLS-TE?

In order to support Traffic engineering, besides explicit routing (source routing), the following components should be available:

Ability to compute a path at the source, taking into account all the constraints. To do so, the source needs to have all the relevant information either available locally or obtained from other routers in the network (e.g., the network topology)

Ability to distribute information about the network topology and the attributes associated with links throughout the network

Ability to support forwarding along the computed path once it has been established

Ability to reserve network resources and to modify link attributes (as the result of certain traffic taking certain routes)

MPLS TE leverages several foundation technologies:

Constrained Shortest Path First (CSPF) algorithm used in path calculation. This is a modified version of the well-known SPF algorithm, extended to support constraints (a sketch follows this list)

RSVP extensions used to establish the forwarding state along the path, as well as to reserve resources along the path

Link-state IGPs with extensions (OSPF with Opaque LSAs, IS-IS with new TLVs (type, length, value) in its Link State Packets), used to propagate topology and resource changes
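The CSPF step mentioned above can be sketched as an ordinary shortest-path-first computation run over a topology from which links that cannot satisfy the constraint have been pruned. The topology, metrics and available-bandwidth figures below are invented; a real implementation works on the TE database flooded by the IGP extensions.

```python
import heapq

# Invented topology: (node_a, node_b) -> {"metric": ..., "avail_bw": ...}
LINKS = {
    ("A", "B"): {"metric": 10, "avail_bw": 100},
    ("B", "D"): {"metric": 10, "avail_bw": 40},
    ("A", "C"): {"metric": 15, "avail_bw": 100},
    ("C", "D"): {"metric": 15, "avail_bw": 100},
}

def cspf(src, dst, bw_needed):
    """Constrained SPF: ignore links lacking bandwidth, then run Dijkstra."""
    adj = {}
    for (a, b), attrs in LINKS.items():
        if attrs["avail_bw"] >= bw_needed:          # constraint pruning
            adj.setdefault(a, []).append((b, attrs["metric"]))
            adj.setdefault(b, []).append((a, attrs["metric"]))
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, metric in adj.get(node, []):
            if nbr not in seen:
                heapq.heappush(heap, (cost + metric, nbr, path + [nbr]))
    return None                                      # no path meets the constraint

print(cspf("A", "D", bw_needed=50))   # -> (30, ['A', 'C', 'D']), avoiding the low-bandwidth link
```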


20. How does MPLS merge traffic flows?

MPLS allows the mapping from IP packet to forwarding equivalence class (FEC) to be performed only once at the ingress to an MPLS domain. A FEC is a set of packets that can be handled equivalently for the purpose of forwarding and thus is suitable for binding to a single label.


From a forwarding point of view, packets within the same subset are treated by the LSR in the same way, even if the packets differ from each other with respect to the information in the network layer header. The mapping between the information carried in the network layer header of the packets and the entries in the forwarding table of the LSR is many to one. That is, packets with different network layer header contents can be mapped into the same FEC (for example, a FEC could be the set of unicast packets whose network layer destination addresses match a particular IP address prefix).

21. How are loops prevented in MPLS networks?

Before focusing on MPLS loops prevention, let's introduce briefly the different loops handling schemes.

Generally speaking, loop handling can be split into two categories:

Loop prevention: provides methods for avoiding loops before any packets are sent on the path - i.e. Path Vector

Loop mitigation (survival + detection): minimizes the negative effects of loops even though short-term transient loops may be formed - e.g., Time-To-Live (TTL). If the TTL reaches 0, then the packet is discarded

Dynamic routing protocols which converge rapidly to non-looping paths

As far as loop mitigation is concerned, MPLS labeled packets may carry a TTL field that operates just like the IP TTL to enable packets caught in transient loops to be discarded.


However, for certain media such as ATM and Frame Relay, where a TTL is not available, MPLS will use buffer allocation as a form of loop mitigation. It is mainly used on ATM switches, which have the ability to limit the amount of switch buffer space that can be consumed by a single VC.

Another technique for non-TTL segments is the hop-count approach: hop-count information is carried within the Label Distribution Protocol messages [3]. It works like a TTL: the hop count decreases by 1 for every successful label binding.

A third alternative adopted by MPLS is an optional loop detection technique called path vector. A path vector contains a list of the LSRs that the label distribution control message has traversed. Each LSR which propagates a control packet (to either create or modify an LSP) adds its own identifier to the path vector list. A loop is detected when an LSR receives a message with a path vector that contains its own identifier. This technique is also used by the BGP routing protocol with its AS path attribute.
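A minimal sketch of the path-vector check described above, assuming a simple dictionary-based control message; the LSR identifiers are invented.

```python
def propagate(message, lsr_id):
    """Add this LSR to the path vector; raise if our own id is already there.
    Toy illustration of the optional loop-detection mechanism described above."""
    if lsr_id in message["path_vector"]:
        raise RuntimeError(f"loop detected at {lsr_id}: {message['path_vector']}")
    message["path_vector"].append(lsr_id)
    return message

msg = {"fec": "10.0.0.0/8", "path_vector": []}
for hop in ["LSR-1", "LSR-2", "LSR-3", "LSR-1"]:   # the repeated LSR-1 closes a loop
    try:
        propagate(msg, hop)
    except RuntimeError as err:
        print(err)   # -> loop detected at LSR-1: ['LSR-1', 'LSR-2', 'LSR-3']
```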

22. How does MPLS perform failure recovery?

When a link goes down it is important to reroute all trunks that were routed over this link. Since the path taken by a trunk is determined by the LSR at the start of the MPLS path (head end), rerouting has to be performed by the head end LSR. To perform rerouting, the head end LSR could rely either on the information provided by IGP or by RSVP/CR-LDP.

However, several MPLS-specific resiliency features have been developed, including Fast Re-Route, RAPID, and Bidirectional Forwarding. See RFC 3469: "Framework for Multi-Protocol Label Switching (MPLS)-based Recovery" for additional information.

23. What differences are there in running MPLS in OSPF versus IS-IS environments?


This is not an MPLS question but an IGP (Interior Gateway Protocol) question. MPLS extensions, stated in IETF RFCs, are supported for both OSPF and IS-IS. MPLS and BGP-VPN real-world deployments have been on both protocols for some time now.

There is much debate over which IGP is best. This is usually centered around scalability. The conventional wisdom is that IS-IS is more scalable than OSPF; that is, a single OSPF area can support 150-plus routers while a single IS-IS area can support 500-plus routers. However, very large IS-IS and OSPF networks have been deployed.

Ultimately, it is best to first understand the benefits and disadvantages of each protocol, then use the customer/network requirements to choose the IGP which best suits your needs.

24. Can there be two or more Autonomous Systems within the same MPLS domain?

This is possible only under very restricted circumstances. Consider the ASBRs of two adjacent ASes. If either or both ASBRs summarize eBGP routes before distributing them into their IGP, or if there is any other set-up where the IGP routes cover a set of FECs which differs from that of the eBGP routes (and this would almost always be the case), then the ASBRs cannot forward traffic based on the top-level label. A similar argument applies to TE tunnels. Some traffic usually will be either IP forwarded by the ASBR, or forwarded based on a non-top-level label.

So there would usually be 2-3 MPLS forwarding domains if there were two ASes: one for each of the two ASes, and possibly one for the link between the two ASBRs (in the case that labelled packets instead of IP packets are forwarded between the two ASBRs).

Also, it's likely that the ASBRs could not be ATM-LSRs, as ATM-LSRs typically have limited or no capability of manipulating label stacks or forwarding unlabelled IP traffic.

Another example (thanks to Robert Raszuk) is the multi-provider application of BGP+MPLS VPNs. As described earlier, there are usually no *top-level* LSPs established across the two (or more) provider ASes involved, so it can be argued that:

(1) The two ASes are separate administrative domains.

(2) However, there are some LSPs established across the two ASes at a lower level in the label stack, so it can also be argued that they form a single MPLS domain at that level.

(1) and (2) are both true, which implies that different definitions of the boundary of the administrative domains can exist with respect to different levels in the label stack. It is also (in hindsight) obvious that different MPLS domain boundaries can exist with respect to different levels of the label stack.

MPLS VPNs

25. How does MPLS enable VPNs?

Since MPLS allows for the creation of "virtual circuits", or tunnels, across an IP network, it is logical that service providers would look to use MPLS to provision Virtual Private Network services. Several standards have been proposed to allow service providers to use MPLS to provision VPN services that isolate a customer's traffic across the provider's IP network and provide secure end-to-end connectivity for customer sites.

It should be noted that using MPLS for VPNs simply provides traffic isolation, much like an ATM or Frame Relay service. MPLS currently has no mechanism for packet encryption, so if customer requirements included encryption, some other method, such as IPsec, would have to be employed. The best way to think of MPLS VPNs is to consider them the equivalent of a Frame Relay or ATM virtual circuit.

26. What alternatives are there for implementing VPNs over MPLS?

There are multiple proposals for using MPLS to provision IP-based VPNs. One proposal (MPLS/BGP VPNs) enables MPLS-VPNs via extensions to the Border Gateway Protocol (BGP). In this approach, BGP propagates VPN-IPv4 information using the BGP multiprotocol extensions (MP-BGP) for handling these extended addresses. It propagates reachability information (VPN-IPv4 addresses) among edge Label Switch Routers (Provider Edge routers). The reachability information for a given VPN is propagated only to other members of that VPN. The BGP multiprotocol extensions identify the valid recipients for VPN routing information. All the members of the VPN learn routes to other members.

Another proposal for using MPLS to create IP-VPN's is based on the idea of maintaining separate routing tables for various virtual private networks and does not involve BGP.

Most implementations of Layer 3 MPLS-VPNs are based on RFC 2547.
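A hedged sketch of the MP-BGP approach described above: each PE keeps a per-VPN routing table (VRF), and a route distinguisher (RD) is prepended to each customer prefix so that overlapping addresses from different VPNs remain unique when advertised. The RD values, prefixes and next hops below are invented examples.

```python
# Per-VRF routing tables on a PE router (customer names and prefixes are invented).
vrfs = {
    "CustomerA": {"rd": "65000:1", "routes": {"10.1.0.0/16": "PE-2"}},
    "CustomerB": {"rd": "65000:2", "routes": {"10.1.0.0/16": "PE-3"}},  # same prefix, different VPN
}

def vpn_ipv4_routes(vrfs):
    """Build the VPN-IPv4 routes a PE would advertise to its MP-BGP peers."""
    advertised = {}
    for name, vrf in vrfs.items():
        for prefix, next_hop in vrf["routes"].items():
            # RD + IPv4 prefix keeps overlapping customer addresses unique
            advertised[f"{vrf['rd']}:{prefix}"] = (name, next_hop)
    return advertised

for route, (vpn, nh) in vpn_ipv4_routes(vrfs).items():
    print(route, "->", vpn, "via", nh)
# 65000:1:10.1.0.0/16 -> CustomerA via PE-2
# 65000:2:10.1.0.0/16 -> CustomerB via PE-3
```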

27. What is the "Martini Draft"?

The "Martini Draft" actually refers to set of Internet drafts co-authored by Luca Martini. These drafts define how MPLS can be used to support Layer 2 transport services such as Ethernet, Frame Relay and/or ATM. Martini drafts define Layer 2 encapsulation methods, as well as Layer 2 transport signaling methods.

Many service providers wish to use MPLS to provision L2-based services to provide an easy migration for the current L2 service customers, while the providers migrate their networks to MPLS. Service providers can use standards such as Martini Draft to provide a myriad of services over their MPLS networks, so customers can simply choose the technology that is best suited to their environment.


The Pseudo Wire Emulation Edge-to-Edge (PWE3) working group is currently developing standards for Layer 2 encapsulation (including Draft-Martini and other supporting standards). Current working group drafts can be located at www.mplsrc.com/standards.shtml under the sub-heading "Layer 2 VPNs and Layer 2 Emulation."

28. What is a "Layer 2 VPN"?

Layer 2 VPNs are an extension of the work being undertaken in the PWE3 working group. Layer 2 VPNs allow service providers to provision Layer 2 services such as Frame Relay, ATM and Ethernet between customer locations over an IP/MPLS backbone. Service providers can thus provision Layer 2 services over their IP networks, removing the need to maintain separate IP and Frame Relay/ATM network infrastructures. This allows service providers to simplify their networks and reduce operating expenses.

The IETF's "Layer 2 Virtual Private Networks (l2vpn)" working group is currently defining standards for provisioning Layer 2 VPN services. Current working group drafts can be located at www.mplsrc.com/standards.shtml under the sub-heading "Layer 2 VPNs and Layer 2 Emulation."

29. What is a Virtual Private LAN Service (VPLS)?

VPLS refers to a method for using MPLS to create virtual LAN services based on Ethernet. In this type of service, all edge devices maintain MAC address tables for all reachable end nodes, much in the same way as a LAN switch.

VPLS services enable enterprises to provide Ethernet reachability across geographic distances served by MPLS services. Several alternatives for enabling VPLS services are in development by the L2VPN working group. Please refer to drafts from that working group for additional information. Also see the Juniper Networks white paper "VPLS: Scalable Transparent LAN Services."

30. Are MPLS-VPNs secure?

Among many network security professionals, the term "VPN" implies "encrypted" tunnels across a public network. Since MPLS-VPNs do not require encryption, there is often concern over the security implications of using MPLS to tunnel non-encrypted traffic over a public IP network. There are a couple of points to consider in this debate:

MPLS-VPN traffic is isolated by the use of tags, much in the same way ATM and Frame Relay PVCs are kept isolated in a public ATM/Frame Relay network. This implies that security of MPLS-VPNs is equivalent to that of Frame Relay or ATM public network services. Interception of any of these three types of traffic would require access to the service provider network.

MPLS-VPNs do not prohibit security. If security is an issue, traffic can be encrypted before it is encapsulated into MPLS by using a protocol such as IPSec or SSL.

The debate over MPLS security really comes down to the requirements of the customer. Customers comfortable with carrying their traffic over public ATM or Frame Relay services should have the same level of comfort with MPLS-VPN services. Customers requiring additional security should employ encryption in addition to MPLS.


MPLS Quality of Service

31. What kinds of QoS protocols does MPLS support?

MPLS supports the same QoS capabilities as IP. These mechanisms are IP Precedence, Committed Access Rate (CAR), Random Early Detection (RED), Weighted RED, Weighted Fair Queuing (WFQ), Class-Based WFQ, and Priority Queuing. Proprietary and non-standard QoS mechanisms can also be supported but are not guaranteed to interoperate with other vendors' equipment.

Since MPLS also supports reservation of Layer 2 resources, MPLS can deliver finely grained quality of service, much in the same manner as ATM and Frame Relay.


32. How do I integrate MPLS and DiffServ?

DiffServ can support up to 64 classes, while the MPLS shim header supports up to 8. The shim header has a 3-bit field defined "for experimental use" (the EXP field). This poses a problem: the EXP field is only 3 bits long, whereas the DiffServ field is 6 bits. There are different scenarios to work around this problem.

There are two alternatives that address this problem, called the L-LSP and E-LSP models, but they introduce complexity into the architecture. The DiffServ model essentially defines the interpretation of the TOS bits. As long as the IP precedence bits map to the EXP bits, the same interpretation as in the DiffServ model can be applied to these bits. Where additional bits are used in the DiffServ model, the label value can be used to interpret the meaning of the remaining bits. Recognizing that 3 bits are sufficient to identify the required number of classes, the remaining bits in the DiffServ model are used for identifying the drop priority; these drop priorities can be mapped onto an L-LSP, in which case the label identifies the class the packet belongs to while the EXP bits identify the drop priority.
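As a rough illustration of the E-LSP style of operation, the sketch below simply collapses the 6-bit DSCP into a 3-bit EXP value at the ingress LER. The mapping shown is an invented example of operator policy, not a value mandated by any standard.

```python
# Invented DSCP -> EXP mapping for an E-LSP style deployment; real mappings
# are an operator policy decision, not fixed by any standard.
DSCP_TO_EXP = {
    46: 5,   # EF   (voice)            -> EXP 5
    34: 4,   # AF41 (video)            -> EXP 4
    26: 3,   # AF31 (mission-critical) -> EXP 3
    0:  0,   # best effort             -> EXP 0
}

def mark_exp(dscp):
    """Collapse a 6-bit DSCP into a 3-bit EXP value (default: best effort)."""
    return DSCP_TO_EXP.get(dscp, 0)

print(mark_exp(46), mark_exp(34), mark_exp(7))   # -> 5 4 0
```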

Many service providers have added, or will add, just a few classes. Even this small enhancement will be hard enough to provision, manage and sell, and it is an effective strategy to get to market quickly with a value-added service.

The following classes may be more appropriate for the initial deployment of MPLS QoS:

High-priority, low-latency "Premium" class (Gold Service)
Guaranteed-delivery "Mission-Critical" class (Silver Service)
Low-priority "Best-Effort" class (Bronze Service)

33. How do I integrate MPLS and ATM QoS?

MPLS makes it possible to apply QoS across very large routed or switched networks because Service Providers can designate sets of labels that have special meanings, such as service class. Traditional ATM and Frame Relay networks implement CoS with point-to-point virtual circuits, but this is not scalable for IP networks. Placing traffic flows at the edge into service classes enables providers to engineer and manage classes throughout the network.

If service providers manage networks based on service classes, not point-to-point connections, they can substantially reduce the amount of detail they must track and increase efficiency without losing functionality. Compared to per-circuit management, MPLS-enabled CoS provides virtually all of the benefit with far less complexity. Using MPLS to establish IP CoS has the added benefit of eliminating per-VC configuration. The entire network is easier to provision and engineer.


Generalized MPLS

34. What is "Generalized MPLS" or "GMPLS"?

From "Generalized Multi-Protocol Label Switching Architecture": "Generalized MPLS extends MPLS to encompass time-division (e.g. SONET ADMs), wavelength (optical lambdas) and spatial switching (e.g. incoming port or fiber to outgoing port or fiber)."

GMPLS represents a natural extension of MPLS to allow MPLS to be used as the control mechanism for configuring not only packet-based paths, but also paths in non-packet based devices such as optical switches, TDM muxes, and SONET ADMs.

35. What are the components of GMPLS?

GMPLS introduces a new protocol called the "Link Management Protocol" or LMP. LMP runs between adjacent nodes and is responsible for establishing control channel connectivity as well as failure detection. LMP also verifies connectivity between channels.

Additionally, the IETF's "Common Control and Measurement Plane" working group (ccamp) is working on defining extensions to interior gateway routing protocols such as OSPF and IS-IS to enable them to support GMPLS operation.

36. What are the features of GMPLS?


GMPLS supports several features including:

Link Bundling - the grouping of multiple, independent physical links into a single logical link

Link Hierarchy - the issuing of a suite of labels to support the various requirements of physical and logical devices across a given path

Unnumbered Links - the ability to configure paths without requiring an IP address on every physical or logical interface

Constraint Based Routing - the ability to automatically provision additional bandwidth, or change forwarding behavior based on network conditions such as congestion or demands for additional bandwidth

37. What are the "Peer" and "Overlay" models?

GMPLS supports two methods of operation, peer and overlay. In the peer model, all devices in a given domain share the same control plane. This provides true integration between optical switches and routers. Routers have visibility into the optical topology and routers peer with optical switches. In the overlay model, the optical and routed (IP) layers are separated, with minimal interaction. Think of the overlay model as the equivalent of today's ATM and IP networks, where there is no direct connection between the ATM layer and the IP routing layer.

The peer model is inherently simpler and more scalable, but the overlay model provides fault isolation and separate control mechanisms for the physical and routed network layers, which may be more attractive to some network operators.

38. What is the "Optical Internetworking Forum"?

The Optical Internetworking Forum (OIF) is an open industry organization of equipment manufacturers, telecom service providers and end users dedicated to promoting the global development of optical internetworking products and fostering the development and deployment of interoperable products and services for data switching and routing using optical networking technologies.

An Introduction to the Optical Internetworking Forum White Paper can be found at http://www.oiforum.com/

Voice over MPLS

39. Can voice and video traffic be natively encapsulated into MPLS?

Yes. The MFA Forum has released a bearer transport implementation agreement, which can be viewed at http://www.mfaforum.org/VoMPLS_IA.pdf.

MPLS Management

40. How are MPLS networks managed?

Currently, most MPLS implementations are managed using CLI. Tools such as WANDL's NPAT simulator allow MPLS networks to be modeled prior to deployment.

Several companies in the operational support systems product space have introduced tools designed to ease MPLS network management and automatically provision LSPs.

41. Are there any MPLS-specific MIBs?

Yes. Several internet drafts have proposed creating MPLS-specific MIBS.

42. Is there open source MPLS code to test MPLS?

Yes. Several open source implementations of MPLS currently exist.


MPLS Training

43. What shows and conferences provide information on MPLS?

Several conferences are devoted to, or include presentations on MPLS. These include:

"MPLScon" held each May in New York City
"MPLS World Congress" held each February in Paris
"MPLS 200x" held each fall in Washington D.C.

MPLS Interoperability Testing

44. Are there any labs that are performing MPLS interoperability testing?

Several groups and organizations conduct MPLS interoperability testing, including:

The University of New Hampshire Interoperability Lab has set up an MPLS Consortium for vendors to test the interoperability of their products and to support MPLS standards development. More information is available on their web site at http://www.iol.unh.edu/consortiums/mplsServices/.

Isocore in Fairfax, VA conducts interoperability testing and hosts the "MPLS 200x" annual event each fall in Washington D.C.

The MFA Forum has conducted several GMPLS interoperability testing events at conferences such as SuperComm and Next Generation Networks.

EANTC AG is a vendor-neutral network test center located in Berlin, Germany, and conducts independent MPLS interoperability testing.

Photonic Internet Lab is supported by the Government of Japan and provides testing and simulation efforts for GMPLS development.

Thursday, June 28, 2012

Google's First Tablet - NEXUS 7


Google (Nasdaq: GOOG)'s first own-brand tablet is following in the footsteps of the popular low-cost Amazon.com Inc. (Nasdaq: AMZN) Kindle Fire rather than attempting to tussle for the high-end with Apple Inc. (Nasdaq: AAPL)'s iPad.

The search giant revealed its Nexus 7 tablet at its developers' conference in San Francisco Wednesday afternoon. The tablet is made for Google by AsusTek Computer Inc. and will start at US$199 for an 8GB model.

Specifications for the tablet include:

  • Google's latest Android 4.1 Jelly Bean operating system
  • An Nvidia Corp. (Nasdaq: NVDA) Tegra 3 quad-core processor
  • A 7-inch display
  • "Android Beam" Near-Field Communication (NFC) support
  • Up to eight hours of active battery use
  • "Tons of free cloud storage"
  • Tight integration with Google apps

Here's what Google has to say about its latest hardware move:

Wednesday, June 27, 2012

Introduction to VPLS – Animation by Alcatel-Lucent: this is a very easy-to-understand explanation of VPLS networks.


Virtual Private Networks have evolved considerably. Today, VPLS-based VPNs enable service providers to offer enterprise customers the operational cost benefits of Ethernet with the predictable Quality-of-Service characteristics of MPLS. This movie gives a clear introduction to VPLS.


What is VPLS? - Explania


Monday, June 25, 2012

Benefits Of Hosted PBX Systems


A Hosted PBX generally involves installing some sort of remotely attachable handsets (usually VoIP based, in today's world, but "Centrex" service does this well in the analog world, too) at a site, then putting the 'brain' of the PBX, along with its primary PSTN connections, into a different site, usually controlled by a vendor.

There are several pros and several cons involved, but a lot of the answer comes with your level of comfort with the vendor, and your level of cost involvement.

In the end, the golden rule of telephony development is this .... "Do Not Mess With Dial Tone". Users depend on it, every day- and businesses live and breathe by their phones. If you find a reliable, cost-effective solution, then that makes a good fit- any chance of unreliability, and you are risking the business.


In a hosted PBX, there is one extra moving part- the link between your site and the provider- that you *must* ensure. If that should fail, all your phones will be down (unless you have some sort of backup line arrangement and local PBX hardware to fall back on).


Specifically, here are some pros and cons of hosted PBXs:

Pros .....

1. Generally, a hosted PBX is less expensive to the end user. Most hosted PBX vendors work out some sort of 'pay per minute' or 'pay per handset' plan, and, since they put many different customers on their enterprise-grade PBX at a central site, are able to pass on savings. For a small site, it's very hard to beat a hosted provider's cost model, unless you've got a large number of handsets, or specific application needs that drive up the price.

2. Maintenance is built in. Hosted PBXs today generally use VoIP hardware, so handset move/add/change work is done by the end user with no more difficulty than moving a PC. The wiring at your site is your LAN, and LANs have a generally high reliability factor. Most changes, therefore, are done at the PBX level, and the vendor can again leverage economy of scale: the changes are generally simple and done via web browser, so they can include the 'maintenance' for free, and get some high-level support on every problem.

3. High reliability of trunk lines, and generally lower cost per minute. Again, through economy of scale, the provider can almost always get a better per-minute rate than a small office can negotiate with the local telco, and can easily afford to have redundant call and network/PSTN paths by sharing them with multiple customers. Having a trunk failure from a hosted PBX center would be horrific, affecting potentially thousands of customers- the vendor simply won't let that happen. (or shouldn't).

Cons .....

1. Loss of flexibility. The hosted PBX company makes its money by providing a fixed package of services and devices to its customers. If you want telephony applications that aren't on the 'menu', the answer may very well be 'no'. Don't like your handsets? You can't change them (beyond a range). Need to change your long distance provider? Forget it - you won't have one to select. In some cases, the vendor will own your number, making it difficult to change vendors without changing your business telephone number.

2. Reliability. As I mentioned above, you're now dependent upon your local LAN and the WAN connection between your site and the vendor. If you've got a small office on a tight budget, failures of either one may take a few hours - or days - to resolve, as you won't have the local resources to apply to them. On top of this, the cheapest WAN method is seen as the Internet - and the Internet itself may not be reliable. Call quality may suffer if your office is doing a lot of Internet traffic, or your Internet provider is having problems.

3. Business reliability. The best rates for hosted PBX companies come from startups. This has its ups and downs - sometimes that can be great, providing personal service at a decent price. But a lot of startups fail - and when they do, your phones go with them. Check your contracts carefully.

Hosted IP PBXs: Enterprise Solutions for Small to Medium-Sized Businesses

With all of the different flavors of VOIP in the marketplace today, products like Vonage do not bode well for an office with more than a few employees. The other route is purchasing a VOIP enabled key system. Well, I would like to mention a third option that is often overlooked, the Hosted VOIP PBX.


The Hosted IP PBX is a great solution for many mid-sized companies looking for the advantages of an enterprise type of phone system without wanting to spend the capital to acquire one. Plus, you get many more inherent advantages a traditional phone system cannot offer you. Let's look in detail at what these advantages are in comparison to a traditional key or PBX phone system.


First, let's do a comparison of features between the different solutions. Typically, when buying a traditional system you are limited by how many cards and ports the unit can accommodate, and when making any changes, like moves or adds, you need to pay a technician to come out and service the unit. With a hosted system the technology resides in the phone company's network, so any adds, moves or changes can be done easily by you through a web browser and a secure login website; this removes the cost of hiring someone to do the work. It also adds tremendous power for redundancy and backup situations. For example, if you have a severe weather event such as a snow storm (which, being from Buffalo, is my point of reference) and most people cannot make it into the office, you can get on your broadband connection at home, log in and forward all of your calls to any number in the world, like your cell phone or home phone. This way you will still be able to be productive.

Additionally, with a traditional phone system, you have to purchase additional software to get added features. In a hosted environment the upgrades are automatically propagated down to your phone giving you things like unified messaging (getting voicemail files on your computer and using your address book to dial phone numbers), hot desking (being able to go between offices and logging onto any phone and it will not only route all of your calls to you automatically, but will also bring over your speed dials and all of your custom phone settings to that other office phone) and any new enhancements that the software developer creates.

Finally, cost. A traditional system purchase includes phones, separate phone cabling, the box in the phone closet, a voice mail system and any additional cards or software you need to give it the features you want such as auto attendant, IVR functionality, etc. A hosted system only requires data cabling, the phones, a switch and a router. This allows approximately a 40% reduction in upfront expense versus traditional phone systems and at least a 60% reduction against a VoIP system that resides on site.


In summary, a hosted VOIP system is an excellent choice for the mid-sized business because it can achieve enterprise functionality at a fraction of the cost while also inheriting redundancy not commonly found in this market segment. But you should always consult with a professional to help guide you through the advantages and disadvantages of the choices currently out there and find the solution that best suits your business's current and future needs.

Saturday, June 23, 2012

Huawei has High Hopes in Managed Services

To Huawei Technologies, managed services is more than just a market segment; it's an opportunity to accelerate sales and market share, and the company wants a bigger share of that growing and lucrative market.


Around the world, Huawei is snapping up major managed services contracts with telecommunications companies and service providers. The foundation of these engagements is that the technology has become too complicated for these companies to manage on their own. Outsourcing management and administration to Huawei frees the subscriber to focus on core business development and innovation.

Already, Huawei has more than 240 major managed services contracts in 60 countries. Most recently, Huawei captured contracts with Sunrise, Telefonica UK and SingTel. According to the China-based company, its managed services business has grown 70 percent over the last six years.

The strategy and value proposition is essentially the same as Cisco’s Smart Net Total Care, which aims to provide Cisco and partner-led technology implementation and managed services support to telecom companies and service providers. Cisco is already generating more than $8.7 billion a year, and sees even greater room for expansion.


While many question the viability of managed services in the cloud era, companies like Cisco and Huawei are seeing tremendous opportunity in moving beyond offloading to true business augmentation. And they're not the only ones: numerous solution providers and systems integrators, such as Dimension Data, are making similar moves.

Even though the Chinese company is one of the world's largest equipment manufacturers, it remains mostly a fabrication provider to other tech vendors rather than a fulfillment company. It wants to break out of the white-box manufacturing model and become a true IT vendor on the global stage. Managed services at the service provider level is seen as an accelerator toward that goal.

What Huawei lacks is a true managed services strategy to enable partners down the channel stack to deliver services to midmarket and SMB accounts. Cisco has been doing managed services enablement for years, providing MSPs access to its technology stack and granting price protection to MSPs that standardize on their equipment.

As Huawei dives deeper into managed services it may find the lure of the midmarket channel too tempting to resist, and could develop programs that enable others to provide managed services on its platform.

IMS - IP Multimedia Subsystem E-Learning

In this module you will learn about the Alcatel-Lucent IP Multimedia Subsystem: the business drivers behind it, its principles of operation, and the solutions offered by Alcatel-Lucent.


IMS is an open standardized architecture for accessing any kind of multimedia service or telephony connection from any kind of access.



IMS - IP Multimedia Subsystem - Explania

Wednesday, June 20, 2012

Cisco is re-branding the ESN software and calling it WebEx Social

Cisco Systems is extending the functionality of its Quad enterprise social networking (ESN) software through integration with Microsoft Office applications and with email clients, including Microsoft Outlook, the company is announcing on Tuesday.


Cisco is also re-branding the ESN software, dropping its Quad name and calling it WebEx Social, as the company seeks to give its social collaboration products a uniform brand using the WebEx name.

"Creating a consistent [collaboration] brand and experience under WebEx is a necessary thing for them to do," said industry analyst Zeus Kerravala, founder and principal analyst at ZK Research.

Cisco is also placing under the WebEx umbrella its Callway cloud-hosted telepresence service, the original WebEx online meeting and Web conferencing service, and the cloud-based Connect IM and presence service. Callway will now be known as WebEx Telepresence, while WebEx will be called WebEx Meetings, and Connect is being renamed WebEx Messenger.

"There's no doubt that Cisco had too many [collaboration] platforms. That's how they grew the business, through acquisitions," Kerravala said.

WebEx Social, like other ESN applications from companies like IBM, Jive Software, Tibbr and Yammer, lets organizations offer their employees Facebook-like and Twitter-like functionality but adapted for a workplace setting so that they can, in theory, communicate and collaborate more effectively.

ESN software is designed to complement traditional workplace communication and collaboration tools, like email, IM and Web conferencing, with employee profiles, activity streams, microblogging, discussion forums, wikis, brainstorming software, recommendations, ratings, joint document editing and annotation, tagging and links.

Interest in ESN software has been growing in recent years, and Forrester Research recently forecasted that spending on these products will grow at a compound annual growth rate of 61 percent through 2016, a year in which this market will reach US$6.4 billion, compared with $600 million in 2010.

Kerravala predicts that ESN will be one of the biggest trends in IT over the next five years. "It will change the way people work," he said.

ESN will be used for many communication and collaboration tasks for which email is ineffective. "It will create a much more interactive way of working than what you have now with email," he said.

Cisco, which is making the announcement at the Enterprise 2.0 conference in Boston, will now let WebEx Social users collaborate on, share and jointly edit Word, PowerPoint and Excel documents.

In addition, users will be able to perform certain WebEx Social actions, like creating, emailing and posting updates to their activity streams, from within Microsoft Outlook and SMTP-compliant email clients.

4 Essential steps for Successful Incident Management

It never hurts to go back to basics. Recently, we were surprised at the confusion of some organizations about the process of incident management, so we thought – why not put a quick incident management primer down on paper?


For successful incident management, first you need a process – a repeatable sequence of steps and procedures. Such a process may include four broad categories of steps: detection, diagnosis, repair, and recovery.

1 – Detection

Problem identification can be handled using different tools. For instance, infrastructure monitoring tools help identify specific resource utilization issues, such as disk space, memory, CPU, etc. End user experience tools can mimic user behavior and identify problems from the users' point of view, such as response time and service availability. Last but not least, domain-specific tools enable detecting problems within specific environments or applications, such as a database or an ERP system.

On the other hand, users can help you detect unknown problems that are not reported by infrastructure or user behavior monitoring tools. The drawback with problem detection by users is that it usually happens late (the problem is already there); moreover, the symptoms reported may point you in the wrong direction.

So which method should you use? Depending on your environment, a combination of multiple methods and tools is usually the best solution. Unfortunately, no single tool will enable detecting all problems.

Logging events will allow you to trace them at any point and improve your process. Properly logged incidents will help you investigate past trends and identify problems (repeating incidents of the same kind), as well as track ownership and responsibility.


Classification of events lets you categorize data for reporting and analysis purposes, so you know whether an event relates to hardware, software, service, etc. It is recommended to have no more than 5 levels of classification; otherwise it can get very confusing. You can start the top level with something like Hardware / Software / Service, or Problem / Service request.

Prioritization lets you determine the order in which the events should be handled and how to assign your resources. Prioritization of events requires a longer discussion, but be aware that you need to consider impact, urgency, and risk. Consider the impact as critical when a large group of users are unable to use a specific service. Consider the urgency as high when the impacted service is of critical nature and any downtime is affecting the business itself. The third factor, the risk, should be considered when the incident has not yet occurred, but has a high potential to happen, for example, a scenario in which the data center’s temperature is quickly rising due to an air conditioning malfunction. The result of a crashing data center is countless services going down, so in this case the risk is enormous, and the incident should be handled at the highest priority.
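A minimal sketch of how impact, urgency and imminent risk might be combined into a priority, assuming a simple three-by-three matrix; the level names and priority labels are invented examples, not part of any specific framework.

```python
# Invented 3x3 priority matrix: (impact, urgency) -> priority label.
PRIORITY = {
    ("high", "high"): "P1", ("high", "medium"): "P2", ("high", "low"): "P3",
    ("medium", "high"): "P2", ("medium", "medium"): "P3", ("medium", "low"): "P4",
    ("low", "high"): "P3", ("low", "medium"): "P4", ("low", "low"): "P5",
}

def prioritize(impact, urgency, at_risk=False):
    """Return the handling priority; imminent-risk incidents jump to P1."""
    if at_risk:                       # e.g. the rising data-center temperature case
        return "P1"
    return PRIORITY[(impact, urgency)]

print(prioritize("medium", "high"))              # -> P2
print(prioritize("low", "low", at_risk=True))    # -> P1
```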

2 – Diagnosis

Diagnosis is where you figure out the source of the problem and how it can be fixed. This stage includes investigation and escalation.

Investigation is probably one of the most difficult parts of the process. In fact, some argue that when resolving IT problems, 80% of the time is spent on root cause analysis vs. 20% that is spent on problem fixing. With more straightforward problems, Runbook procedures may be very helpful to accelerate an investigation, as they outline troubleshooting steps in a methodical way.

Runbook tip: The most crucial part of the runbook is the troubleshooting steps. They should be written by an expert and be detailed enough that every team member can follow them quickly. Write all your runbooks using the same format, and insist on using the same terms in all of them. New team members who are not yet familiar with every system will then be able to navigate the troubleshooting steps much more easily.

Following a runbook manually can be very time consuming and lengthen recovery immensely. Instead, consider automating the diagnostic steps using runbook automation software. If you build the flow cleverly and factor in all the steps that lead to a conclusion, automating the diagnostics process will give you quick answers and help you decide what your next step should be.
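Here is a minimal Python sketch of that idea: each troubleshooting step from the runbook becomes a small check, and the flow stops at the first check that explains the symptom. The use of systemctl, and the service name and port in the example, are assumptions.

import socket
import subprocess

def service_running(name: str) -> bool:
    """Assumes a systemd host; 'systemctl is-active' exits 0 when the unit is active."""
    return subprocess.run(["systemctl", "is-active", "--quiet", name]).returncode == 0

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Check whether a TCP port answers within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def diagnose(service: str, host: str, port: int) -> str:
    """Run the troubleshooting steps in order and report the first finding."""
    steps = [
        ("service process is not running", lambda: not service_running(service)),
        (f"service is running but port {port} is not reachable",
         lambda: not port_open(host, port)),
    ]
    for conclusion, failed in steps:
        if failed():
            return conclusion
    return "no known cause found - escalate to the next support level"

print(diagnose("nginx", "localhost", 80))  # hypothetical service and port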

Escalation procedures are needed in cases when the incident needs to be resolved by a higher support level.

3 – Repair

The repair step, well… it fixes the problem. This may sometimes be a gradual process, where a temporary fix or workaround is implemented primarily to bring a service back quickly. An incident repair may involve anything from a service restart or a hardware replacement to a complex software code change. Note that fixing the current incident does not mean that the issue won’t recur – more on that in the next step.

Here too, straightforward repairs such as a service restart or a disk cleanup can be automated.
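For illustration, here is a minimal Python sketch of automating those two repairs; the systemd restart command, the log directory, and the retention period are assumptions for the example.

import subprocess
import time
from pathlib import Path

def restart_service(name: str) -> bool:
    """Assumes a systemd host; returns True if the restart command succeeded."""
    return subprocess.run(["systemctl", "restart", name]).returncode == 0

def cleanup_old_logs(directory: str, keep_days: int = 14) -> int:
    """Delete *.log files older than keep_days and return how many were removed."""
    cutoff = time.time() - keep_days * 86400
    removed = 0
    for log_file in Path(directory).glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            log_file.unlink()
            removed += 1
    return removed

# Example (hypothetical service name and path):
# restart_service("myapp")
# cleanup_old_logs("/var/log/myapp", keep_days=14)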

4 – Recovery

The recovery phase involves two parts: closure and prevention.

Closure means following up on any notifications previously sent to users about the problem, as well as any escalation alerts, so that everyone involved is now informed of the resolution. Closure also entails formally closing the incident in your logging system.

Prevention covers the activities you undertake, where possible, to prevent an incident from recurring and therefore becoming a problem. Two important tools help with this task:

RCA (Root Cause Analysis) process – the purpose of the RCA is to investigate the root cause that led to the service downtime. It is important to mention that the RCA should be performed by the service owners, who are not necessarily the ones who solved the specific incident. This is an additional reason why incident logging is so important – the information in the ticket is crucial for this investigation.

And finally, incident reports – while a report will not prevent the problem from occurring again, it will allow you to continually learn from and improve your incident management process.

Tuesday, June 19, 2012

Ethernet Patch Cable Colour Codes & Best Practices

Ethernet Patch Cable Colour Codes


 
Keeping data from getting crossed in a data center can be a pain. Below are some of the colour-coding conventions followed in data centers (a small lookup sketch follows the list):
  • blue - most common, used for workstations or generic servers
  • red - critical systems; sometimes used for building fire systems
  • yellow - less critical systems
  • orange - cables that go off to other racks
  • green - where the money flows, e.g. e-commerce systems
  • black - VoIP systems, since the phones came with black patch leads
  • white - video camera network
  • pink - used for RS-232 serial cables
  • purple - used for ISDN-type links
  • tan - telephone lines  
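Here is a minimal Python sketch of encoding such a convention so an inventory or audit script can check patch leads against it; the mapping simply mirrors the list above and is site-specific rather than a formal standard.

# The mapping mirrors the list above; treat it as a site-specific convention.
CABLE_COLOR_CONVENTION = {
    "blue":   "workstations / generic servers",
    "red":    "critical systems (sometimes building fire systems)",
    "yellow": "less critical systems",
    "orange": "cables going off to other racks",
    "green":  "e-commerce systems",
    "black":  "VoIP systems",
    "white":  "video camera network",
    "pink":   "RS-232 serial cables",
    "purple": "ISDN-type links",
    "tan":    "telephone lines",
}

def expected_use(color: str) -> str:
    """Look up what a patch lead of this colour should be carrying."""
    return CABLE_COLOR_CONVENTION.get(color.lower(), "unknown - check site policy")

print(expected_use("Orange"))  # -> cables going off to other racks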
Below is a list of Ethernet cable colors commonly found in data centers:

blue
light blue (rare, but often seen on Cisco cables)
fluorescent blue (even rarer)
red (many of these are crossover cables)
yellow (this was a standards-approved color for crossover cables)
Cisco cable yellow
orange
pink
fluorescent pink
green
fluorescent green (never seen in the wild, but it's in catalogs)
black (easy to confuse with power cords)
white (somewhat rare)
light gray
dark gray (rare)
silver (rare)
tan/beige (common on Cat 3 patch cables)
purple/violet (they are different, but when you order one you get the other)
fluorescent violet (very rare)

 
One thing to consider: about 15% of all males are slightly color blind, and only about 10% think they are. Many colors look the same, but often a color-blind person can easily tell the difference between, say, tan and beige cables, yet can't tell red from green.

 
Color codes for fiber (fibre?)

  
Orange - multimode
Yellow - single mode
Gray - could be either, but tends to be single mode
Light blue - could be either
Color codes for fiber connectors:
Blue - straight cut - the fiber joint is perpendicular (90 degrees)
Green - angled cut - the fiber joint is angled slightly

Note that buildings will often have these colors:

 
red - fire alarm cables
white - cheaper fire alarm cables
blue - who knows? Could be alarm, fire, HVAC, or data
tan - same as blue, but older

For -48 volt systems you can get:

 
red - ground
black - negative 48V, but can routinely be -56V
blue - could be the same as red or black, but tends to be the same as red on a different circuit

 
Note that -48 volt systems tend to be able to provide massive amounts of unfused current. These systems will often have enough capacity to boil the metal in tools.

 
That describes the outer jackets. For the conductors inside cables, such as power cables, you can have:

 
Live power from selected places around the world:

 

red (power in AU/UK)
brown (old standard for AU/NZ/UK)
yellow (old phase 2 in the UK)
blue (old phase 3 in the UK)
blue (phase 3 in AU)
blue (neutral in Europe)
black (power in Europe)
gray (old IEC phase 3)
gray (power in Europe)
gray (neutral in US/Japan)
white (neutral in US)
white (phase 2 in AU)
white (switch return in AU/UK)
green/yellow - ground in most places
green or yellow, but not both (power per IEC 60446, and a bad idea)
green (ground in the US according to parts of the electrical code)
green (never ground in the US according to other parts of the electrical code)
bare copper (ground in the US, or death)

 
The color codes for ships make much more sense and are about as uniform. For example, blue is used for compressed air on US-registered ships, yet blue is for water on UK-registered ships.

 

Enabling good governance in a Managed Services contract

Managed Services 2.0 demands good governance to be successful. More than half of all outsourcing engagements fail: 20% in the first 2 years and a further 30%+ are not renewed (Gartner Group Survey Nov 2011).


Good governance is about control over processes and control over information. When done well it enables the customer to communicate and then measure its service provider’s performance against its expectations. It clearly establishes the framework within which an operator expects its managed services providers to innovate, and it demonstrates the alignment (or otherwise) between both businesses.

It has the additional advantage of building evidence-based cases for targeted investment in the network by service providers; and it clearly illustrates the impact of poor network performance on meeting business objectives.

Good governance ensures that attention is being paid all day every day to the details that matter:

o Measuring things
o Understanding what is important
o Identifying issues
o Spotting potential problems and assigning actions
o Ensuring accountability is set and responsibility is taken
o Planning for change (and recognizing it when it happens)
o Thinking about, and acting on, the correlation – or more simply, the connection – between things

An enterprise with strong governance will be able to look at its most important processes such as project delivery or customer service and understand clearly the risk of the process failing.

In Managed Services contracts the risk of a process failing can lead to penalties, clients not renewing eight-figure contracts, and even legal liabilities due to not hitting certain KPIs or SLAs.

But in those contracts it is not just the risk of process failure that can result in contracts not being renewed – poor control of information can lead to the same result.

Handling governance consistently well is hard, painstaking, essential work, and technology is the only answer.


Monday, June 18, 2012

Cisco announces Cisco Cloud Connected Solution

Cisco expands the Aggregation Services Router (ASR) platform with the Cisco ASR 1002-X router.


Cisco Accelerates Businesses' Transition to the Cloud with Innovative Cloud Connected Solution
Cisco Cloud Connected Solution Enables Users to Connect to the Cloud with Confidence

According to the 2012 Cisco® Global Cloud Networking Survey, the majority of IT decision makers cite a cloud-ready network as the biggest requirement to enable the migration of business applications to the cloud.


To meet these challenges, Cisco today announced the Cisco Cloud Connected Solution, a new product solution that delivers cloud-enabled routing and wide area network (WAN) optimization platforms, along with Cloud Connector software and services, enabling users to securely connect to cloud services.

The Cisco Cloud Connected Solution helps organizations easily and securely take advantage of cloud computing and accelerate the deployment of cloud services while delivering an optimal user experience at the lowest cost.

The Cisco Cloud Connected Solution comprises:


• Cloud Connectors: new software embedded into the Cisco Integrated Services Router (ISR) G2 platform, along with services that improve the performance, security and availability of cloud applications. The open architecture of the Cloud Connectors allows service providers and channel partners to develop third-party Cloud Connectors to help them deliver differentiated services to their customers.


• Cloud-enabled platforms: the new Cisco Cloud Services Router (CSR) is a virtual router that enables customers to extend their virtual private networks (VPNs) into the cloud. Cisco is also expanding the Aggregation Services Router (ASR) platform with the Cisco ASR 1002-X router, and introducing the Cisco UCS® E-Series Server Modules on the ISR G2, delivering lean branch solutions by hosting multiple third party services on a single branch platform.

• Innovative cloud services: new capabilities added to existing routing and WAN optimization platforms to better support cloud computing, including the new Cisco Application Visibility and Control (AVC) technology integrated into the Cisco ISR and ASR platforms to optimize the delivery and troubleshooting of cloud applications on the network; and Cisco UCS® E-Series Server Modules on the ISR G2. New Cisco AppNav technology intelligently clusters Cisco's Wide Area Application Services (WAAS) physical and virtual appliances into a single resource pool managed by a central controller.

With more than 500,000 worldwide customers and 78.6 percent of the global enterprise router market share,(1) Cisco is uniquely positioned to lead the transformation of the WAN with its enterprise routing platforms and the introduction of the Cisco Cloud Connected Solution.

The Cisco Cloud Connected Solution is a foundational element of the Cisco Cloud Intelligent Network, which together with Cisco Unified Data Center and Cisco Cloud Applications and Services enables the world of many clouds. In addition, Cisco is introducing enhancements to existing routing, security and WAN optimization solutions, as well as new features and models on the ASR, ISR G2 and WAAS appliances.

The following provides additional information on the featured new products in the Cisco Cloud Connected Solution portfolio:

• Cloud Services Router (CSR) 1000v: This virtual router securely enables enterprises to extend and control various facets of their enterprise network to clouds while providing cloud service providers (CSPs) with increased revenue opportunities through an on-demand, flexible "network-as-a-service" model.


• Cisco WAAS 5.0 with AppNav: Enables customers to virtualize WAN optimization resources by pooling them together into an on-demand resource that includes the best latency performance. Cisco WAAS 5.0 with AppNav provides an elastic "scale-as-you-grow" application deployment model that enables enterprise-wide application acceleration and performance visibility. Cisco AppNav can be installed as a hardware module on existing Cisco Wide Area Virtualization Engine (WAVE) appliances or as software on the new Cloud Services Router (CSR).

• Cisco ASR 1002-X router: Enables organizations to increase their WAN performance by up to seven times and enhance deployment agility with an easy "pay-as-you-grow" performance upgrade model that delivers up to 36 Gbps performance with services turned on for the WAN Edge, Internet Edge and Managed Services environments. Cisco is also introducing the ESP-100G, a forwarding processor for the ASR 1000 Series in cloud edge deployments, in addition to FlexVPN on ASR 1000, for highly secure IPsec site-to-site and remote access VPN.

• Cisco is also introducing Cloud Connectors for Cisco Hosted Collaboration Services (HCS) survivability, Cisco ScanSafe Web security service and the third-party CTERA Cloud Storage Connector on the Cisco Unified Computing System™ (Cisco UCS®) E-Series.

All products are currently available, with the exception of the CSR 1000v and Cisco UCS E-Series on ISR G2, which are scheduled to begin shipping Q4 2012. The ASR 1002-X router is scheduled to begin shipping in September 2012.

More information about cloud connectors can be found here -
http://www.cisco.com/en/US/prod/collateral/routers/ps10536/white_paper_c11-706801.html




Supporting Quotes

• Praveen Akkiraju, senior vice president and general manager, Cisco Services Routing Technology Group: "As businesses are driving the rapid adoption of cloud based services, routing platforms and the WAN have become a strategic control point to provide an optimal user experience across the cloud. The Cisco Cloud Connected Solution redefines the WAN architecture with key innovations that leverage the network intelligence as a critical link in cloud deployments by putting more functionality into traditional enterprise routing, allowing customers to connect to the cloud with an optimal user experience."

• Gartner: Bjarne Munch, Principal Research Analyst: "Network infrastructure must rapidly adapt to fully support future enterprise communications systems, which will be highly virtualized, intelligent and application-aware. The impact of new infrastructure and application strategies on the network cannot be overstated. To prepare for these changes, the enterprise network must be modernized, and enterprises must plan for the retirement of legacy technologies." ("IT Market Clock for Enterprise Networking Infrastructure, March 2012")

VoIP – Per Call Bandwidth Consumption


One of the most important factors to consider when you build packet voice networks is proper capacity planning. Within capacity planning, bandwidth calculation is an important factor to consider when you design and troubleshoot packet voice networks for good voice quality.


This document explains voice codec bandwidth calculations and features to modify or conserve bandwidth when Voice over IP (VoIP) is used.


http://www.cisco.com/en/US/tech/tk652/tk698/technologies_tech_note09186a0080094ae2.shtml
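For a quick feel of the arithmetic behind those calculations, the minimal Python sketch below computes per-call bandwidth as packets per second multiplied by total packet size (codec payload plus IP/UDP/RTP and Layer 2 overhead). The G.711 and G.729 figures used are the commonly quoted defaults; check them against the Cisco document for your exact configuration.

IP_UDP_RTP_OVERHEAD = 40   # bytes: 20 IP + 8 UDP + 12 RTP
ETHERNET_OVERHEAD   = 18   # bytes: Ethernet header + CRC (no 802.1Q tag)

def per_call_kbps(codec_rate_kbps: float, payload_bytes: int,
                  l2_overhead: int = ETHERNET_OVERHEAD) -> float:
    """Return bandwidth per call in kbps for a given codec payload size."""
    pps = codec_rate_kbps * 1000 / 8 / payload_bytes          # packets per second
    packet_bytes = payload_bytes + IP_UDP_RTP_OVERHEAD + l2_overhead
    return pps * packet_bytes * 8 / 1000

# G.711 (64 kbps codec, 160-byte payload at 20 ms) ~= 87.2 kbps per call on Ethernet
print(round(per_call_kbps(64, 160), 1))
# G.729 (8 kbps codec, 20-byte payload at 20 ms) ~= 31.2 kbps per call on Ethernet
print(round(per_call_kbps(8, 20), 1))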

Saturday, June 16, 2012

Operations Plan: The Definitive Guide

Want to write an Operations Guide? This tutorial explains how to write your first operations manual: it helps you get started, suggests how to format the document and create the table of contents, and outlines what else you need to include in a sample plan.


Operations Guide: Definition

 
What is an Operations Guide?

Definition: An Operations Guide provides instructions to System Administrators to help them manage updates, maintain computers, and run reports. It also offers troubleshooting information.

Operations Guide: How to Start

 
The difficulty in writing an Operations Guide, like any technical document, is to know where to start. Here’s a suggested approach:

Who – define who will read the Operations Guide, for example, is it developers, testers or users?

 What – identify the most important tasks you need to document to help sys admins. How do you find out? Ask the people who will use it and capture their most urgent needs.

 Where – identify where it will be used. Will it be used online, printed out, or read on a mobile device?

 When – is this document used in an emergency? Or is it used in less pressurized settings? How does this affect the content and the way it’s structured? When is it due for completion?

 How – create a list of who will help you write the document, provide answers, review it, and then sign off.

 Others – you’ll also need things like style guides and other supporting documents.

 
While this may seem like a lot, focus first on who will use it and what problems you’re going to solve.


 
Operations Guide: Define Your Audience

 
Who’s going to read your Operations Guide? Before you start writing, perform an audience analysis.

Identify the main user types (sys admins, testers, developers)
  • Pain points
  • Urgent needs
  • Expectations  
This helps structure the content. For example, if you know that developers require in-depth information on how to set up a server, then provide very exact step-by-step instructions.

  
Operations Guide: Create a Format

 
If this is your first Operations Guide, use a template to get started.

  
While the template may not be perfect, it identifies the key sections for the table of contents. Once you have these in place, fill in the gaps and complete the document section by section.

 
Formatting the document also involves:

 
Updating the cover page so it displays the version number, official document title, publication date, and possibly author.

 
Updating the header and footers, for example, by including the section or chapter.

 
Using a consistent font (or series of fonts) across the document. When you copy and paste from different documents, it’s easy for rogue fonts to enter the document. Use styles to control this.

 
Operations Guide: Provide an Outline

 
When the user first opens the document, provide them with an outline of what’s included in the guide. This helps orient the reader so they understand how the operations guide is structured and how information will be presented.

 

Here’s a sample outline:

 

Title: IBM WebSphere 9.0 SP3 Operations Guide

 
Description: This guide describes the major tasks involved in administering and troubleshooting IBM WebSphere 9.0 SP3.

 

 
In this guide you will find:

 
Administering IBM WebSphere 9.0 SP3

Troubleshooting IBM WebSphere 9.0 SP3

Additional Resources for IBM WebSphere 9.0 SP3

Appendix A: Uninstalling the Database

Appendix B: Uninstalling SP3

Appendix C: Settings for Web Services

Appendix D: Permissions

Appendix E: Performance

Appendix F: Database Maintenance

Appendix J: Setup Codes

 
Administering IBM WebSphere 9.0 SP3

 
This section contains background information and procedures for performing the major tasks involved in administering IBM WebSphere 9.0 SP3.

 
In this guide:

 
Overview of IBM WebSphere 9.0 SP3

Managing IBM WebSphere 9.0 SP3

Reports in IBM WebSphere 9.0 SP3

Securing IBM WebSphere 9.0 SP3

 
Once you have outlined the high level sections, you can transition into the document, for instance:

 
“You can use IBM WebSphere 9.0 SP3 to manage downloading database updates from IBM and distributing them to computers in your network.”

 
Operations Guide: Table of Contents

 
Now that we have this in place, create the table of contents.

 
Remember that this will vary depending on your setup but the following sections apply to most Operations Manuals.

 
Introduction

 
Introduce the document by describing the target audience and the systems which will be covered by the Operations Guide.

 

  • System overview – explain how the system functions at a high level, for instance, process credit cards, record transactions, or store files.
  • Authorized use – identify who is authorized to use the application, for instance developers, testers, and sys admins.
  • Points of contact – identify the application owner and how this person can be contacted.
  • Help desk – provide details of who runs the help desk (technical support) and the different levels of support, if applicable.
  • Hours of operations – identify when the application is supported, e.g. 9-5, and when it is scheduled for maintenance, e.g. every quarter.  
System overview

 
Next, provide a brief description of the operation of the system, including its purpose and uses.

  •  System operations
  •  Software inventory
  •  Software interfaces
  •  Resource inventory
  •  Report inventory
  •  System restrictions
  • Hardware
 
Operations team

 
Outline the operations team in an org chart. If useful, provide a leadership chart and identify the leadership structure within the team, such as managers, team leaders and domain experts.

  
It also highlights reporting relationships and reflects the management and leadership structure of the team:
  • Organization chart
  •  Leadership chart
  •  Key roles & responsibilities
  •  Anticipated change

 
Operations schedule

 
Here we describe how activities are scheduled. Tasks can generally be grouped into daily, weekly and monthly schedules.

  
Depending on the circumstances, additional administrative tasks may need to be implemented, and some tasks may occur more or less often or not at all.
  • Daily tasks
  •  Weekly tasks
  •  Monthly tasks
  •  Backup schedule
  •  System runs

 Describe the system runs used by Operations personnel when:

  •  Scheduling operations
  •  Assigning equipment
  •  Managing input and output data, and
  •  Restart/recovery procedures
Provide detailed information on how to execute the system runs. Provide a run identifier, i.e. a unique number, to reference each run.

 

 System Restores

 
Finally, describe the steps required to restore operations. Describe:

 

  • Possible failure scenarios and
  • Steps to restore operations and data in that scenario
 Update Classifications

 
Another area to consider is update classifications. These represent a specific type of update for different products.

 

The following list shows sample update classifications.

 
  • Critical updates – Fixes for problems that resolve critical errors.
  • Security updates – Fixes that address security issues.
  • Definition updates – Updates to definition files.
  • Feature packs – New features rolled into products at the next release.
  • Service packs – Sets of hotfixes, security updates, and other updates released since the product’s release
 
Conclusion

 
In this Operations Plan tutorial we’ve looked at how to identify our readers, how to format the document, and how to create the table of contents. What’s missing is how to write the Operations Plan itself.

 

Microsoft upgrading switches to Arista Networks

Courtesy: Bradreese

"Korea Telecom is one customer that has now deployed hundreds of switches after the initial trial; the company saved over 80% in capex by deploying Arista's solution."

"We believe Microsoft may also be upgrading its network based on Arista switches.

"Flattening the network and simplifying the data center offer significant cost benefits and vendors including Juniper, Brocade, Cisco, and others have set forth various fabric technologies. We also have non-fabric fabric contenders, such as Arista, which believe that a standards-based approach is the way to get scale in a network. On top of that, the OpenFlow movement and the broader opportunities for Software Defined Networks promise to address the network problem from a different angle and potentially reduce the requirements for a full fabric deployment. For its part, Arista remains adamant that no fabric is required with Ethernet set to get only better from here."

"Arista says that SDNs can be implemented using controllers of distributed network controllers, each with its pros and cons. Initial applications for OpenFlow, according to Arista, include dynamic packet redirection for network tap aggregation, lawful intercept, and network segmentation deployments.
"And with more APIs in development, OpenFlow doesn't necessarily have to be the only method to deploy SDNs. Other options include OpenStack, Netconf, XMPP, VMware, or even future hypervisors, which all promise some topology agnostic network virtualization for applicant and workload mobility optimization. So while some investors may have concluded that OpenFlow equates to SDNs, others believe OpenFlow is just one option to implement an SDN."

"Arista typically displaces other vendors by starting with a two-to six-switch initial deployment. When customers decide that they would like to go ahead with Arista switches, they either do a rip and replace of entire data centers or grow alongside their existing network with more Arista switches.

"Korea Telecom is one customer that has now deployed hundreds of switches after the initial trial; the company saved over 80% in capex by deploying Arista's solution."

"Virtualization is one of the drivers of data center consolidation and the need for more efficient data centers. Arista has a partnership with VMWare whereby Arista's operating system EOS incorporates software capabilities of VM Tracer. Through VM Tracer, network operators get better visibility into the virtualization layer and have better control of their network protocols.

"Arista's approach to next-generation data center networking is based on its software OS imbedded in its switch technology. The company claims that fabrics are more marketing related than anything else and that Arista's switches already have the capabilities that many of its competitors are claiming to be revolutionary in networking. Arista offers high performance, functionality, low latency, and ease of use on open standards to be competitive with larger competitors in data center switching.

"Arista leverages merchant silicon for its switching technology, unlike other vendors that use their proprietary ASIC technologies. Merchant silicon may be getting more advanced and Arista claims that the company can achieve 6x to 8x better performance using merchant silicon vs. utilizing ASICs. Partnering with its merchant silicon partners, Arista is able to come out with a new technology every 18 months."

Friday, June 15, 2012

Cisco Teams Up With Instructure To Move The World’s Largest IT Classroom To The Cloud

The Cisco Networking Academy is the largest education program in the world you’ve probably never heard of. The Academy, which is Cisco System’s largest and longest-running corporate responsibility program, partners with over 10,000 universities, community colleges, and high schools to teach students networking and ICT skills, preparing them for jobs and higher ed programs in engineering, computer science, and related fields.


More than one million students currently participate in Cisco’s Academies, which operate in colleges and universities across 165 countries and in 17 languages. Today, Cisco is announcing that it has awarded Instructure a multi-year contract to use the startup’s flagship open-source LMS product, Canvas, as its learning platform.

For Cisco, Instructure will help it move its global classroom and online IT courses to the cloud, with the additional benefit of Canvas’ open platform that integrates easily with other technologies and web services and provides simplified tools for grading and assessment.

For Instructure, a startup that’s not yet 18 months old, this is a big win, as Cisco’s Networking Academy represents one of the biggest education platforms in the world and will help scale Canvas as a platform significantly, allowing it to reach students across the globe.

As a little background for those unfamiliar with the startup: Back in January of 2011, Mozy founder Josh Coates launched Instructure with the goal of disrupting the Learning Management System (LMS) market. Even if the “LMS market” sounds foreign, you’re likely familiar with the market’s ubiquitous and unpopular giants, Blackboard and Moodle, which have long been the bane of disorganized college students everywhere.

When we last checked in with the startup in April of last year, it had just closed an $8 million Series B round from a host of notable investors, including OpenView Venture Partners, Epic Ventures, TomorrowVentures, and Tim Draper of Draper Fisher Jurvetson (bringing its total investment to $9M).

Meanwhile, over the last 14 months, Instructure has been growing like a weed. At the end of 2010, Instructure had four customers, but since launching Canvas, the startup has been adopted by 189 institutions (including names like Brown University, Auburn University, the University of Texas at Austin, and the Wharton School of Business) and about 2.7 million users. What’s more, in 14 months, Instructure has gone from 20 employees to over 130.

And now it will be built into Cisco’s Academy, adding the network’s one million students to its growing user base, while delivering a 21st-century education to students who might not otherwise have the resources or cash to enroll in IT and computer science courses.

Although Coates wouldn’t reveal Instructure’s revenues, he did say that, while there’s been a lot of interest in contributing to the startup’s Series C round, the board has chosen not to pursue any further funding for the time being. Apparently, it’s got plenty of reserves in the tank, and having secured this multiyear contract with Cisco, it wouldn’t be surprising to see its expansion continue.

