

Friday, November 21, 2014

With MPLS-TP and SDN, a simpler dynamic control plane


MPLS-TP transforms carrier networks for packet switching, but SDN could take it a step further. Combining the technologies, engineers could reduce complexity in the dynamic control plane and gain flexible service creation.


The MPLS Transport Profile (MPLS-TP) standard was developed to address a serious problem facing carriers -- traffic on their transport networks has shifted rapidly from circuit-switched TDM traffic to IP packet data. Now, they've got to support packet switching while continuing to carry legacy traffic with the high reliability of their current networks.

In large part, they've addressed the shift to packet networking with Multi-Protocol Label Switching (MPLS). But MPLS lacks some of the features transport network carriers require.


MPLS was developed to improve the efficiency of IP-switched networks. It creates Label Switched Paths (LSPs) and replaces IP's time-consuming, hop-by-hop destination lookup with a quick index into a label table.

MPLS also supports applications such as interactive voice and video that require specified quality of service. Traffic engineered LSPs can be provisioned to create and maintain guaranteed throughput, delay and jitter levels.

But MPLS-TP brings added features -- including maintenance functions and the ability to carry any type of legacy data traffic -- while operating over any physical layer technology and supporting both static and dynamic control planes. The MPLS-TP standardization effort was the joint work of the Internet Engineering Task Force (IETF) and the ITU Telecommunication Standardization Sector (ITU-T), based on requirements received from carriers.


MPLS-TP OAM functions

Existing transport networks have proven reliable due to the Operations, Administration and Maintenance (OAM) functions provided by current SONET/SDH and optical transport network (OTN) technologies. OAM packets are mixed in with customer data and flow along each data stream. Maintenance nodes along each network path constantly monitor OAM packet arrivals to measure throughput, packet loss rate, delay, jitter, and to detect breaks in network continuity.

When a problem is detected, network managers can use other OAM functions to test the path. OAM commands can block the path to everything but OAM packets, allowing managers to test segments along the path and to pinpoint the location and cause of the defect.

In an MPLS-TP environment, Pseudowire Emulation Edge-to-Edge (PWE3) standards under development by the IETF specify how to emulate a dedicated point-to-point or point-to-multipoint link. Client networks at each end of the pseudowire can continue to exchange legacy data, as they did before the change in transit network technology.

MPLS-TP also supports the GMPLS standards created by the IETF to extend label-switching concepts to a variety of physical layer technologies. By incorporating GMPLS, MPLS-TP meets carrier requirements to operate over technologies including SONET/SDH, OTN and Gigabit Ethernet.


MPLS-TP for both static and dynamic control planes

Carrier network transport paths have typically been set up statically by managers and modified only to support new requirements, such as when the physical network is updated or when there is an outage. Security requirements can also make it necessary for managers to set up a path along a specific geographical route.

MPLS-TP supports both static and dynamic path creation and management. It includes the dynamic control plane mechanisms of the MPLS protocol suite. A dynamic control plane offers advantages by creating and modifying paths without the need for manager intervention.

Benefits of MPLS-TP and SDN

Using SDN in an MPLS-TP environment can reduce complexity in the dynamic control plane and allow for flexible service creation.

In dynamic control plane operations, OSPF-TE, RSVP-TE, LDP, LMP, I-BGP and MP-BGP are required to find available paths, reserve bandwidth to create traffic-engineered LSPs, monitor paths in order to maintain guarantees, distribute labels to routers along a path, and establish routes between autonomous systems.

Each router in the network must have sufficient memory and CPU capacity to support all of these protocols. A software update must be scheduled every time a vendor fixes a defect in one of the protocol modules.

A switch to SDN would remove the need for router-resident protocol implementations. An OpenFlow controller, for example, would issue directives to set up network paths. Applications interfacing to the controller would implement the functions currently provided by the router-resident protocol code. Routers would no longer require as much memory or compute capacity, reducing energy requirements and the frequency of software updates.

SDN control would make it possible to create a service that requires capabilities not present in current protocol standards. Engineers could implement services by writing code for the application software interfaced to the OpenFlow controller. Without SDN, adding a feature to a standardized protocol would require a proposal to the IETF followed by a period of discussion and refinement before a new protocol version was issued -- an extremely time-consuming effort.
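
As a purely illustrative sketch, the Python fragment below shows how a controller-side application could compute per-hop label rules for a path and hand them to a controller, replacing the router-resident OSPF-TE/RSVP-TE/LDP machinery. The flow entries are plain dicts standing in for OpenFlow flow-mod messages; the router names, labels, prefix and action names are all invented, and no real controller API is assumed.

def build_lsp_rules(path, labels):
    # path: ordered router IDs; labels: one label per link along the path
    rules = [{"router": path[0],                      # ingress: classify and push
              "match": {"ip_dst": "198.51.100.0/24"},
              "actions": [("push_mpls", labels[0]), ("output", "to_" + path[1])]}]
    for i in range(1, len(path) - 1):                 # transit: swap and forward
        rules.append({"router": path[i],
                      "match": {"mpls_label": labels[i - 1]},
                      "actions": [("set_mpls_label", labels[i]),
                                  ("output", "to_" + path[i + 1])]})
    rules.append({"router": path[-1],                 # egress: pop and deliver
                  "match": {"mpls_label": labels[-1]},
                  "actions": [("pop_mpls",), ("output", "client")]})
    return rules

for rule in build_lsp_rules(["R1", "R2", "R3", "R4"], [100, 200, 300]):
    print(rule)   # in a real deployment these would be sent as flow-mods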

With OpenFlow control, engineers could also create a single optimized path across the boundaries of network segments. The current MPLS-TP control plane creates paths only within the boundaries of the MPLS-TP network itself. Creating an end-to-end service stretching between segments of a client network connected by an MPLS-TP network requires setting up three individual paths: one through each client network segment and another through the MPLS-TP link.

As SDN continues to display its benefits in the data center, proponents of extending the concept to transport networks will work with early adopters to test and refine their proposal. If SDN control proves successful, it will add flexibility and simplify transport network operation.


Thursday, July 4, 2013

What is Entropy Label?


In today's networks, load balancing is achieved by Link Aggregation or the Equal Cost Multipath (ECMP) mechanism. With ECMP, multiple paths of the same cost exist to a particular destination, and any of them can be used to reach it.

While ECMP load balancing can be done per packet, that may introduce jitter, delay, and even out-of-order packets at the ultimate destination. Current ECMP load balancing is therefore flow specific: the source/destination IP addresses, transport protocol (UDP or TCP), and source/destination ports from the packet -- collectively the KEYS -- are fed into a load balancing algorithm to select the egress link.

In an MPLS network, a transit LSR may have to perform deep packet inspection to extract the KEYS for the load balancing algorithm. A new idea has been proposed to eliminate that need: the ingress LER pulls the KEYS from the native packet, feeds them to the load balancing algorithm, and places the resulting value in a label known as the ENTROPY LABEL before sending the packet across the MPLS network. Any LSR along the path can then use the already-hashed value in the entropy label for load balancing.
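
As a rough illustration (the real hash is implementation specific), the Python sketch below shows how an ingress LER might fold the KEYS into a 20-bit entropy value, and how a transit LSR can then pick an egress link from that value alone, with no deep packet inspection:

import zlib

# Fold the flow KEYS into a 20-bit value suitable for a label
# (illustrative hash only; real routers use their own hardware hashing).
def entropy_value(src_ip, dst_ip, proto, src_port, dst_port):
    keys = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(keys) & 0xFFFFF

# A transit LSR just maps the already-hashed value onto one of its links.
def pick_egress_link(entropy_label, links):
    return links[entropy_label % len(links)]

el = entropy_value("10.1.1.1", "10.2.2.2", "TCP", 34567, 443)
print(el, pick_egress_link(el, ["link-1", "link-2", "link-3"]))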

Below are a few points to remember about the entropy label:

• It will not be used for the forwarding decision; it only carries load balancing information.
• It will be generated by the ingress LER.
• It must be at the bottom of the label stack, with the Bottom of Stack (BoS) bit set to 1.
• It must have its TTL value set to "0".

Since the entropy label is now the bottom-most label, the application label -- the VPN label (in the case of MPLS VPN) or the PW/VC label (in the case of L2VPN) -- will no longer carry BoS=1. Any egress LER supporting the entropy label feature, on receiving an MPLS packet whose application label has BoS=0, understands that one more label (the entropy label) follows, pops it, and sends the packet on.

What is Entropy Label Indicator?

In some applications, like MPLS VPN, the egress PE has the application label as the bottom-most label and so can infer an entropy label whenever the application label arrives with BoS=0. But in a few applications, like CsC (Carrier supporting Carrier) VPN, the egress PE of the carrier provider pops the application label and sends a labeled packet on to the carrier customer device; in this case, the application label always has BoS=0. So another way is needed to identify whether an entropy label is present. This is done by the Entropy Label Indicator (ELI).


On the control plane, the egress LER signals the ELI value (label assignment as usual) to the remote ingress LER devices. When the ingress pushes the entropy label, it pushes the ELI label on top of the entropy label, with BoS=0 and TTL=0.



How are Entropy Label support and the optional Entropy Label Indicator signaled between LERs?

With LDP signaling, a new sub-TLV (the Entropy Label sub-TLV) is used (Type to be decided). It contains a 20-bit "Value" field, which is zero when an ELI is not required and non-zero when an ELI is required; the non-zero value is used as the ELI label.

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |U|F|        Type (TBD)         |           Length (8)          |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |               Value                   |     Must Be Zero      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
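
As an illustration of the layout above, here is a hypothetical Python encoding of the sub-TLV; since the Type is still to be decided, TYPE_TBD below is just a placeholder, not a real code point:

import struct

TYPE_TBD = 0x0000   # placeholder: the real type has not been assigned yet
LENGTH = 8

def encode_entropy_label_subtlv(value, u_bit=0, f_bit=0):
    # Value is the 20-bit field: zero when no ELI is needed, non-zero otherwise.
    assert 0 <= value < (1 << 20)
    first16 = (u_bit << 15) | (f_bit << 14) | (TYPE_TBD & 0x3FFF)
    last32 = value << 12          # 20-bit Value followed by 12 Must-Be-Zero bits
    return struct.pack("!HHI", first16, LENGTH, last32)

print(encode_entropy_label_subtlv(0).hex())        # ELI not required
print(encode_entropy_label_subtlv(0x12345).hex())  # non-zero: used as the ELI label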

With BGP signaling, a new Optional, Transitive Path Attribute (tentatively known as the Entropy Label Attribute) is used in the BGP UPDATE when advertising the NLRI.

With RSVP-TE, an Entropy Label Attribute TLV is signaled in the LSP_ATTRIBUTES object in both the PATH and RESV messages.



How does it function in the data plane?


1. The ingress LER, on receiving a packet, looks into the FIB table to identify the egress LER and the associated label details.
2. Once the egress LER is identified, the ingress checks whether the egress LER supports the entropy label. If it doesn't, the ingress simply pushes the usual labels and sends the packet on to the intermediate LSR devices.
3. If the egress LER supports the entropy label and doesn't require an ELI, the ingress PE pushes the stack TL/AL/EL, where TL is the Tunnel Label (the top label used to reach the egress PE), AL is the Application Label, and EL is the Entropy Label whose value is calculated by running the hashing function on the KEYS from the native packet; the EL sits at the bottom with S=1 and TTL=0 (see the sketch after this list).
4. If the egress LER supports the entropy label and requires an ELI, the ingress PE pushes the stack TL/AL/ELI/EL, where the ELI label is the one signaled by the egress PE (S=0, TTL=0) and the EL is the Entropy Label as above (S=1, TTL=0).
5. Any transit LSR uses the value in the Entropy Label if load balancing is required.
6. The egress PE, on receiving the MPLS packet, checks whether the Application Label is marked Bottom of Stack. If yes, it removes the last/bottom label and sends the packet on.
7. If the Application Label is not marked Bottom of Stack, the egress PE checks whether the label below is set with S=0 and TTL=0. If yes, it confirms that this is the ELI value it advertised itself.
8. If the label below is set with S=1 and TTL=0, the egress PE understands that this is the Entropy Label and removes it before sending the packet to the CE device.
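
Below is a rough Python model of the label-stack handling in these steps, under stated assumptions: labels are (value, S, TTL) tuples, the TTLs of 255 on the tunnel and application labels are arbitrary, and all label values are invented. It models only the entropy/ELI disposal part of the egress behavior:

def ingress_push(tunnel_label, app_label, entropy, eli=None):
    stack = [(tunnel_label, 0, 255), (app_label, 0, 255)]
    if eli is not None:
        stack.append((eli, 0, 0))    # ELI as signaled by the egress: S=0, TTL=0
    stack.append((entropy, 1, 0))    # EL at the bottom of stack: S=1, TTL=0
    return stack

def egress_strip_entropy(stack, my_eli=None):
    label, s, ttl = stack[-1]
    if s == 1 and ttl == 0:          # bottom label looks like an entropy label
        stack = stack[:-1]
        if my_eli is not None and stack and stack[-1][0] == my_eli:
            stack = stack[:-1]       # also remove the ELI we advertised ourselves
    return stack

stack = ingress_push(16001, 24005, entropy=0xABCDE, eli=15999)
print(stack)
print(egress_strip_entropy(stack, my_eli=15999))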

Thursday, May 9, 2013

Difference between T-MPLS & MPLS-TP




Wednesday, April 24, 2013

EVC framework : Flexible Service Mapping



Configuring service instances using the EVC framework : Flexible Service Mapping.

Probably the biggest advantage of the EVC framework is the ability to support multiple services per physical port. This means that under a single physical port you can have any of the following mixed together :

- 802.1q trunk
- 802.1q tunnel
- Local connect
- Scalable EoMPLS (EoMPLS xconnect)
- Multipoint Bridging (L2 bridging)
- Multipoint Bridging (VPLS, SVI-based EoMPLS)
- L3 termination

Besides all of the above, by using the EVC framework you can combine multiple different services from different physical ports, i.e. when using multipoint bridging (aka bridge-domains), in order to put them into the same virtual circuit.





Local connect is a L2 point-to-point service between two service instances on the same system. The service instances can be under the same port (hair-pinning) or under different ports. In contrast with traditional L2 bridging, it doesn't use any MAC learning and it's strictly between 2 points. Local Connect also doesn't require any global VLAN resource.

In order to have the following two service instances connect to each other by a L2 point-to-point service, you need first to remove their difference, which is the outer tag (you can also remove both tags).

interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10 second-dot1q 100
  rewrite ingress tag pop 1 symmetric

interface Gi1/2
 service instance 20 ethernet
  encapsulation dot1q 20 second-dot1q 100
  rewrite ingress tag pop 1 symmetric

! EVC-LC-10-20 is just a name for this point-to-point connection
connect EVC-LC-10-20 Gi1/1 10 Gi1/2 20

Note : You can use the same service instance number under different physical ports.

In order to have the following two service instances be connected by Local Connect, you don't need any VLAN tag rewrite, because they both have the same vlans.

interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10-20

interface Gi1/2
 service instance 20 ethernet
  encapsulation dot1q 10-20

connect EVC-LC-10-20 Gi1/1 10 Gi1/2 20


In order to have the following two service instances be connected by Local Connect, you can either translate the vlan on one of them, or remove the tags on both of them.

interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10
  rewrite ingress tag translate 1-to-1 dot1q 20 symmetric

interface Gi1/2
 service instance 20 ethernet
  encapsulation dot1q 20

connect EVC-LC-10-20 Gi1/1 10 Gi1/2 20


Scalable EoMPLS or EoMPLS xconnect is a L2 point-to-point service between two service instances on different systems. Like Local Connect it doesn't use any MAC learning and it's solely between 2 points. It also doesn't require any global VLAN resource (this applies to scalable EoMPLS only; for SVI-based EoMPLS check VPLS below).

You can have any VLAN tag rewrite configuration under the service instances, as long as you keep in mind the following :

a) If both sides are EVC based, then you need to have common VLAN tag rewrite configurations on both sides
b) If one side is not EVC based, then depending on whether it's a physical interface or a subinterface, you'll probably need to remove one tag from the EVC side (subinterfaces remove tags by default)

Note : By default, VC type 5 is used for EoMPLS. In case VC type 4 is negotiated and used, an additional tag will be added after the VLAN tag rewrite configuration and before the data gets EoMPLS encapsulated.

7600-1
interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10
  xconnect 1.1.1.2 10 encapsulation mpls

7600-2
interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10
  xconnect 1.1.1.1 10 encapsulation mpls


Note : Have a look at Scalable EoMPLS for additional information.

Multipoint Bridging uses the concept of bridge-domains. A bridge-domain (BD) is like a traditional L2 broadcast domain where MAC-based forwarding is used for communication between participants (I'll try to write a new post with more details about bridge-domains). Bridge-domains use global VLAN resources.

In the following example, three service instances are put into the same bridge-domain by translating the tags where necessary.

interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10
  rewrite ingress tag translate 1-to-1 dot1q 20 symmetric
  bridge-domain 20

interface Gi1/2
 service instance 20 ethernet
  encapsulation dot1q 20
  bridge-domain 20

interface Gi1/3
 service instance 30 ethernet
  encapsulation dot1q 30
  rewrite ingress tag translate 1-to-1 dot1q 20 symmetric
  bridge-domain 20


The bridge-domain ID represents the global VLAN used in the system. Extra care needs to be taken in case of L2 trunk/tunnel ports participating in a bridge-domain :

a) L2 trunk ports remove automatically the tag on ingress and add it automatically on egress. Equivalent EVC ports need that to be done manually by using the appropriate rewrite actions.
b) L2 tunnel ports add a new tag on ingress and remove it on egress. Equivalent EVC ports do not need any similar rewrite actions, because by default bridge-domains add a new tag on top of the already existing one.

In the following example two ports (a L2 trunk port and an EVC port) are put into the same bridge-domain (Vlan 20). Tag 10 needs to be removed from the EVC port before it joins bridge-domain 20.

interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10
  rewrite ingress tag pop 1 symmetric
  bridge-domain 20

interface Gi2/1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 20
 switchport mode trunk


In the following example two ports (a L2 tunnel port and an EVC port) are put into the same bridge-domain (Vlan 20). On the EVC port, tag 20 is added on top of tag 10 in order to have the incoming frames join bridge-domain 20.

interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10
  bridge-domain 20

interface Gi2/1
 switchport access vlan 20
 switchport mode dot1q-tunnel


VPLS or SVI-based EoMPLS can be accomplished by configuring xconnect under a SVI. This SVI is the same as the one defined by the bridge-domain ID.

interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10-20
  bridge-domain 30

interface Gi1/2
 service instance 10 ethernet
  encapsulation dot1q 10-20
  bridge-domain 30

interface Vlan 30
  xconnect 1.1.1.2 10 encapsulation mpls


By adding "split-horizon" after the bridge-domain ID in both service instances, there can be no L2 communication between them.

interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10-20
  bridge-domain 30 split-horizon

interface Gi1/2
 service instance 10 ethernet
  encapsulation dot1q 10-20
  bridge-domain 30 split-horizon

interface Vlan 30
  xconnect 1.1.1.2 10 encapsulation mpls


By adding an additional tag through a rewrite action in both service instances, you can differentiate them while they are being transferred through the same VC.

interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10-20
  rewrite ingress tag push dot1q 21 symmetric
  bridge-domain 30 split-horizon

interface Gi1/2
 service instance 10 ethernet
  encapsulation dot1q 10-20
  rewrite ingress tag push dot1q 22 symmetric
  bridge-domain 30 split-horizon

interface Vlan 30
  xconnect 1.1.1.2 10 encapsulation mpls


SVI-based EoMPLS can be considered like a VPLS, where there is only one VC pointing to one neighbor.

Note : Have a look at SVI-based EoMPLS for additional information.

For L3 termination you have the usual two options : use subinterfaces or use bridge-domains (just like switchports) and SVIs. ES/ES+ and SIP-400 cards support termination of double-tagged traffic too.

Keep in mind the following :

a) you must remove all tags before terminating L3 traffic
b) you must use matching rules based on unique single or double tags (no vlan ranges are supported, although they might be accepted)

This is an example using a bridge-domain and the equivalent SVI:
interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10
  rewrite ingress tag pop 1 symmetric
  bridge-domain 40

interface Gi1/2
 service instance 10 ethernet
  encapsulation dot1q 20 second-dot1q 30
  rewrite ingress tag pop 2 symmetric
  bridge-domain 40

interface Vlan 40
  ip address 1.1.1.1 255.255.255.0


This is an example using subinterfaces:

interface Gi1/1.10
 encapsulation dot1q 10
 ip address 1.1.1.1 255.255.255.0

interface Gi1/1.20
 encapsulation dot1q 20 second-dot1q 30
 ip address 1.1.1.1 255.255.255.0


Note : ES cards have a major limitation : single-tagged vlans configured under a subinterface are globally significant. On the other hand, double-tagged vlans are locally significant. On the ES+ and SIP-400 cards, both single-tagged and double-tagged vlans are locally significant.
 

Friday, April 12, 2013

EVC : Flexible VLAN Tag Rewrite



Following the previous post about Flexible Frame Matching, this new post describes the second major step in configuring service instances using the EVC framework : Flexible VLAN Tag Rewrite.

Each service instance can change the existing VLAN tag to be a new VLAN tag by adding, removing, or translating one or two VLAN tags. Flexible VLAN tag rewrite includes 3 main operations :

1) pop (remove an existing tag)
2) push (add a new tag)
3) translate (change one or two tags to another one or two tags) - this can be seen as a combination of pop and push operations

Theoretically, any existing combination of one or two VLAN tags can be changed to any new combination of one or two VLAN tags by just using a simple (as soon as you get the idea) line of configuration. Practically, there are some limitations, which you'll see below.

These are the relevant CLI options under the service instance (you need first to have configured flexible frame matching for these to appear) :

7600(config-if-srv)#rewrite ingress tag ?
pop        Pop the tag
push       Rewrite Operation of push
translate  Translate Tag


Pop operation
7600(config-if-srv)#rewrite ingress tag pop ?
1  Pop the outermost tag
2  Pop two outermost tags

! remove one tag
7600(config-if-srv)#rewrite ingress tag pop 1 ?
symmetric  Tag egress packets as specified in encapsulation


! remove two tags
7600(config-if-srv)#rewrite ingress tag pop 2 ?
symmetric  Tag egress packets as specified in encapsulation



Push operation
7600(config-if-srv)#rewrite ingress tag push ?
dot1q  Push dot1q tag

! add one tag
7600(config-if-srv)#rewrite ingress tag push dot1q ?
<1-4094>  VLAN id

7600(config-if-srv)#rewrite ingress tag push dot1q 20 ?
second-dot1q  Push second dot1q tag
symmetric     Tag egress packets as specified in encapsulation


! add two tags
7600(config-if-srv)#rewrite ingress tag push dot1q 20 second-dot1q ?
<1-4094>  VLAN id

7600(config-if-srv)#rewrite ingress tag push dot1q 20 second-dot1q 30 ?
symmetric  Tag egress packets as specified in encapsulation



Translate operation
7600(config-if-srv)#rewrite ingress tag translate ?
1-to-1  Translate 1-to-1
1-to-2  Translate 1-to-2
2-to-1  Translate 2-to-1
2-to-2  Translate 2-to-2

! remove one tag and add one new tag
7600(config-if-srv)#rewrite ingress tag translate 1-to-1 dot1q 20 ?
symmetric  Tag egress packets as specified in encapsulation


! remove one tag and add two new tags
7600(config-if-srv)#rewrite ingress tag translate 1-to-2 dot1q 20 second-dot1q 30 ?
symmetric  Tag egress packets as specified in encapsulation


! remove two tags and add one new tag
7600(config-if-srv)#rewrite ingress tag translate 2-to-1 dot1q 20 ?
symmetric  Tag egress packets as specified in encapsulation


! remove two tags and add two new tags
7600(config-if-srv)#rewrite ingress tag translate 2-to-2 dot1q 20 second-dot1q 30 ?

symmetric  Tag egress packets as specified in encapsulation



Examples
interface GigabitEthernet1/2
!
service instance 10 ethernet
encapsulation dot1q 10
! remove one tag (10) on ingress
! add one tag (10) on egress
rewrite ingress tag pop 1 symmetric
!
service instance 20 ethernet
encapsulation dot1q 10 second-dot1q 20
! remove two tags (10/20) on ingress
! add two tags (10/20) on egress
rewrite ingress tag pop 2 symmetric
!
service instance 30 ethernet
encapsulation dot1q 30
! add one tag (300) on ingress
! remove one tag (300) on egress (if the resulting frame doesn't match tag 30, it's dropped)
rewrite ingress tag push dot1q 300 symmetric
!
service instance 40 ethernet
encapsulation dot1q 40
! add two tags (400/410) on ingress
! remove two tags (400/410) on egress (if the resulting frame doesn't match tag 40, it's dropped)
rewrite ingress tag push dot1q 400 second-dot1q 410 symmetric
!
service instance 50 ethernet
encapsulation dot1q 50 second-dot1q 1-4094
! remove one tag (50) and add one new tag (500) on ingress
! remove one tag (500) and add one new tag (50) on egress
! the inner tags (1-4094) remain unchanged
rewrite ingress tag translate 1-to-1 dot1q 500 symmetric
!
service instance 60 ethernet
encapsulation dot1q 60
! remove one tag (60) and add two new tags (600/610) on ingress
! remove two tags (600/610) and add one new tag (60) on egress
rewrite ingress tag translate 1-to-2 dot1q 600 second-dot1q 610 symmetric
!
service instance 70 ethernet
encapsulation dot1q 70 second-dot1q 100
! remove two tags (70/100) and add one new tag (700) on ingress
! remove one tag (700) and add two new tags (70/100) on egress
rewrite ingress tag translate 2-to-1 dot1q 700 symmetric
!
service instance 80 ethernet
encapsulation dot1q 80 second-dot1q 200
! remove two tags (80/200) and add two new tags (800/810) on ingress
! remove two tags (800/810) and add two new tags (80/200) on egress
rewrite ingress tag translate 2-to-2 dot1q 800 second-dot1q 810 symmetric


There are some important things to keep in mind when configuring Flexible VLAN Tag Rewrite.

1) You have to use the "symmetric" keyword, although the CLI might not give you this impression:

7600(config-if-srv)#rewrite ingress tag pop 1 ?
symmetric  Tag egress packets as specified in encapsulation


7600(config-if-srv)#rewrite ingress tag pop 1
Configuration not accepted by the platform
7600(config-if-srv)#rewrite ingress tag pop 1 symmetric
7600(config-if-srv)#


Generally rewrite configurations should always be symmetric. Whatever rewrites are on the ingress direction, you should have the reverse rewrites on the egress direction for the same service instance configuration. So, if you pop the outer VLAN tag on ingress direction, then you need to push the original outer VLAN tag back on the egress direction for that same service instance. All this is done automatically by the system when using the "symmetric" keyword. Have a look at the examples included above and check the comments to see what operations are happening on ingress and egress.

2) Due to the mandatory symmetry, some operations can only be applied to a unique tag matching service instance (so they are not supported for VLAN range configurations) or cannot be applied at all.

i.e.
You cannot translate a range of vlans
7600(config-if-srv)#encapsulation dot1q 10 - 20
7600(config-if-srv)#rewrite ingress tag translate 1-to-1 dot1q 30 symmetric
Encapsulation change is not logically valid.


You cannot pop a range of vlans
7600(config-if-srv)#encapsulation dot1q 10 second-dot1q 20,30
7600(config-if-srv)#rewrite ingress tag pop 2 symmetric
Encapsulation change is not logically valid.


If supposedly you could do the above, how could the opposite be done? i.e. if the system removed the tags from frames matching inner vlans 20,30 on the ingress, how would the system know on which frames to add 20 and on which to add 30 on the egress?

Of course you can push a new vlan over a range of vlans.
7600(config-if-srv)#encapsulation dot1q 10-20
7600(config-if-srv)#rewrite ingress tag push dot1q 30 symmetric


You can only push one or two tags for "encapsulation untagged" and "encapsulation default". No pop or translate operations are supported.

As a rule you can think of "you cannot pop or translate something that is not specifically defined as a single unit". Just imagine what would happen in the opposite direction and everything should become clear.

Keep in mind that some configurations might be accepted, but they won't work.

3) You cannot have more than one VLAN tag rewrite configuration under a single service instance. That means you can have either none or one. If there is no VLAN tag rewrite configuration, the existing VLAN tag(s) will be kept unchanged. If you need more than one, you might want to try to create more service instances using more specific frame matching criteria on each one. The translate operation might also seem useful in such conditions.

4) You need to be extra careful when using Flexible VLAN Tag Rewrite and Bridge Domains. Flooded (broadcast/multicast/unknown unicast) packets will get dropped by the service instances that do not agree on the egress tag. Although all service instances under a common bridge domain will get the flooded frame, there is an internal validation mechanism that checks whether the result of egress rewrite (based on the opposite of ingress rewrite) will allow the flooded frame to pass. The push operations under the examples show this behavior.

5) To have an EVC based port act like a L2 802.1q trunk port, you need to remove the outer tag manually and then put it under a bridge domain. On normal L2 switchports this is done automatically by the system.

So this
interface Gi1/1
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 10

is equivalent to this
interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10
  rewrite ingress tag pop 1 symmetric
  bridge-domain 10

Note: The above examples were done on a 7600 with ES+ cards running 12.2(33)SRB IOS.
 

Monday, April 8, 2013

EVC : Flexible Frame Matching


EVC stands for Ethernet Virtual Connection, and in Cisco's platforms it's used to represent Cisco's software architecture for addressing Carrier Ethernet Services. In MEF (Metro Ethernet Forum) terminology EVC means "Ethernet Virtual Connection/Circuit", but here EVC also represents the whole Carrier Ethernet software infrastructure developed by Cisco.

EVC has many advantages, one of them being Flexible Frame Matching. Flexible Frame Matching is a functionality that allows each service instance to match frames with either a unique single vlan or a list/range of vlans. It can also match single/double tagged frames, untagged frames, or everything else that belongs to the default category.

Flexible Frame Matching is the first major step after configuring a service instance. This is the complete idea:

1) Service Instance definition (create the service instance)
2) Flexible frame matching (configure what frames need to be matched based on vlan match criteria)
3) Flexible VLAN tag rewrite (configure the action to do on the matched frames' vlan tags)
4) Flexible Service Mapping (map the service instance to a service)
5) Extra service features (apply some extra features on the service instance, i.e. QoS)

The middle 3 most important steps can also be described as:

a) Frame matching
b) Frame rewrite
c) Frame forwarding

Example
interface Gi1/1
 ! Service Instance definition
 ! ID is local port significant
 service instance 10 ethernet
  ! Flexible frame matching
  encapsulation dot1q 10 second-dot1q 20
  ! Flexible VLAN tag rewrite
  rewrite ingress tag pop 1 symmetric
  ! Service Mapping
  xconnect 10.10.10.10 100 encapsulation mpls
  ! Extra service features
  service-policy input TEST-INPUT-POLICY

The current EVC implementation supports matching only on VLAN tags, but in the future we may see matching on other L2 fields too, since the hardware is quite capable.

These are the current supported vlan matching configurations:

Single tagged frames, where match criteria can be a single vlan, a list/range of vlans, or any vlan (1-4094)
encapsulation dot1q <vlan>
encapsulation dot1q <vlan>,<vlan>
encapsulation dot1q <vlan>-<vlan>
encapsulation dot1q any

Double tagged frames, where the first VLAN tag can only be single (software limitation), while the second VLAN tag can be single, list/range, or any
encapsulation dot1q <vlan> second-dot1q <vlan>
encapsulation dot1q <vlan> second-dot1q <vlan>,<vlan>
encapsulation dot1q <vlan> second-dot1q <vlan>-<vlan>
encapsulation dot1q <vlan> second-dot1q any

Untagged frames, where all untagged frames are matched
encapsulation untagged

Default tag frames, where all tagged/untagged frames that are not matched by other more specific service instances are matched
encapsulation default


Examples
interface Gi1/1
 !
 service instance 10 ethernet
  ! single tagged frames with a specific tag
  encapsulation dot1q 10
 !
 service instance 20 ethernet
  ! single tagged frames with multiple tags
  encapsulation dot1q 20,22,24,26-28
 !
 service instance 30 ethernet
  ! single tagged frames with any tag
  encapsulation dot1q any
 !
 service instance 40 ethernet
  ! frames with a specific single outer tag and specific single inner tag
  encapsulation dot1q 10 second-dot1q 20
 !
 service instance 50 ethernet
  ! frames with a specific single outer tag and multiple inner tags
  encapsulation dot1q 10 second-dot1q 20,22,24,26-28
 !
 service instance 60 ethernet
  ! frames with a specific single outer tag and any inner tag
  encapsulation dot1q 10 second-dot1q any
 !
 service instance 70 ethernet
  ! frames without a tag
  encapsulation untagged
 !
 service instance 80 ethernet
  ! frames that do not match under any other service instance
  encapsulation default


There are some important things to keep in mind when configuring Flexible Frame Matching.

1) When you have multiple vlan match criteria configured under different service instances of a single physical interface, the most specific is the one that wins (it's like the longest match rule used in the routing table). So the order of service instances under an interface doesn't have the same effect like the classes in MQC. This is because frame matching is done by hardware using the linecard's TCAM table, where each frame matching configuration gets converted to 1 or more TCAM entries (vlan lists/ranges in matching criteria are the most TCAM consuming configurations). The number of 16000 service instances per ES20 module is based on the assumption that each service instance uses a single TCAM entry.

2) When you don't get any match according to the above longest match rule, matching is done according to a looser match algorithm, where a single tag configuration matches all frames that have a common outer tag (regardless of the number of inner tags) and a double tag configuration matches all frames that have the first 2 tags in common (regardless of any additional inner tags; btw, I'm planning to do a triple-tag test soon).

Example
interface Gi1/1
 service instance 10 ethernet
  encapsulation dot1q 10
 service instance 20 ethernet
  encapsulation dot1q 10 second-dot1q 20
 service instance 30 ethernet
  encapsulation default

On the above configuration:

10/20 will be matched by service instance 20 (both tags matched)
10/30 will be matched by service instance 10 (outer tag matched)
20/30 will be matched by service instance 30 (no tag matched)

"encapsulation dot1q 10" matches "10", "10/20", "10/30" and so on.
"encapsulation dot1q 10 second-dot1q 20" matches "10/20", "10/20/30", "10/20/40" and so on.

Note: The above examples were done on a 7600 with ES+ cards running 12.2(33)SRB IOS.

 

Saturday, April 6, 2013

Amazing Lecture on MPLS and Traffic Engineering Technologies

The title says it all. Professor Karandikar gives two amazing lectures on MPLS and MPLS-TE that most engineers should hope to know cold. It is an amazing amount of material delivered very precisely. Great stuff!

Lecture 24 - MPLS





Lecture 25 - Traffic Engineering








Figure 1. Ingress Packet Workflow.




Figure 2. MPLS Stacks.







 

Monday, December 31, 2012

Cisco OTV : Quick overview

 
When facing a multiple virtualized datacenter challenge, one would easily pick a Layer 2 backbone between datacenters so that Virtual Machines are able to vMotion to another datacenter. However, Layer 2 was not designed to be scalable between datacenters. One must realize that Layer 2 errors occurring in one datacenter can easily spread over the Layer 2 link towards all your DRP datacenters… This blogpost is about OTV, an answer to datacenter scalability while still having Layer 2 connectivity between end devices.
 

Important OTV Features

Scalability

  • Extends Layer 2 LANs over any network that supports IP
  • Designed to scale across multiple data centers

Simplicity

  • Supports transparent deployment over existing network without redesign
  • Requires minimal configuration commands (as few as four)
  • Provides single-touch site configuration for adding new data centers

Resiliency

  • Preserves existing Layer 3 failure boundaries
  • Provides automated multihoming
  • Includes built-in loop prevention

Efficiency

  • Optimizes available bandwidth, by using equal-cost multipathing and optimal multicast replication
 
So how does OTV work? It creates a custom Layer 2 network on top of your existing Layer 3 network by encapsulating all Layer 2 packets destined for another datacenter in Layer 3 packets. Broadcasts, boot storms, and spanning-tree loops will all be contained in one datacenter, as these packets will be filtered or not forwarded at all. When your backbone consists of an MPLS network, you can achieve any-to-any Layer 2 datacenter connectivity without any of the risks or downsides… pretty neat!
 
 
The requirements for OTV are pretty simple: you need a Nexus 7000 series switch with an M2 line card. The F1 and F2 line cards do not support the OTV feature. If your Nexus 7000 acts as the default gateway with VLAN SVIs, you will have to create a separate OTV VDC, as SVIs and OTV are not compatible.
 
You should also have a clear isolation model for your FHRP protocols. In a traditional Layer2 extension, you would have an FHRP active-standby scenario over multiple datacenters. This is still possible with OTV, however this might lead to inefficient use of bandwidth and latency. If you decide to have an active FHRP in each datacenter, you will need to manually isolate the FHRP to one datacenter.
 
I’m unsure where to place OTV from a solution perspective. It seems more like an enterprise solution, as it supports only 256 VLANs at the moment. However, with the VDC capability of the Nexus 7000 it’s possible to create up to 8 VDCs per Nexus switch, creating the opportunity to send up to 2048 VLANs over 8 different OTV networks. The design of such a solution seems a bit cumbersome to me.
 
Another limitation is the number of OTV devices in one site, which is limited to 2, and the total number of OTV sites, which is limited to 6.
 
If you’re a service provider and thinking about OTV as your datacenter interconnect protocol, think about your VLAN strategy before deploying virtual environments and DRP datacenters.
 

Monday, November 26, 2012

HP EVI vs. Cisco OTV: A Technical Look

 
HP announced two new technologies in the late summer, Multitenant Device Context (MDC) and Ethernet Virtual Interconnect (EVI), that target private clouds. Mike Fratto outlined the business and market positions, particularly in regard to Cisco's Overlay Transport Virtualization (OTV) and Virtual Device Context. However, the technology is also interesting because it's a little different than Cisco's approach. This post will drill into HP's EVI and contrast it with Cisco's OTV, as well as with VPLS.
 
HP EVI supports Layer 2 Data Center Interconnect (L2 DCI). L2 DCI technology is a broad term for technologies that deliver VLAN extension between data centers. Extending VLANs lets virtual machines move between data centers without changing a VM's IP address (with some restrictions). The use cases for such a capability include business continuity and disaster recovery. For a more extensive discussion of L2 DCI, please see the report The Long-Distance LAN.
 
HP EVI is a MAC-over-GRE-over-IP solution. Ethernet frames are encapsulated into GRE/IP at ingress to the switch. The GRE/IP packets are then routed over the WAN connection between the data centers.
 
EVI adds a software process to act as a control plane that distributes the MAC addresses in each VLAN between the EVI-enabled switches. Thus, the switch in data center A updates the MAC address table in data center B and vice versa. By contrast, in traditional use, Ethernet MAC addresses are auto-discovered as frames are received by the switch.
 
Because GRE packets are ordinary IP packets, they can be routed over any WAN connection, making the approach widely useful for customers. In a neat bit of synergy, the HP Intelligent Resilient Framework (IRF) chassis redundancy feature means that WAN connections are automatically load-balanced, because switches that are clustered in an IRF configuration act as a single switch (a Borg architecture, not an MLAG architecture). Therefore, multiple WAN connections between IRF clusters are automatically load-balanced by the control plane either as LACP bundles or through ECMP IP routing, which is a potential improvement over Cisco's OTV L2 DCI solution.
 
However, note that load balancing of the end-to-end traffic flow is not straightforward because there are three connections to be considered: LAN-facing, to the servers using MLAG bundles; WAN-facing, where the WAN links go from data center edge switches to the service provider; and intra-WAN, or within the enterprise or service provider WAN. Establishing the load balancing capabilities of each will take some time.
[Chart: comparing HP EVI with Cisco OTV and VPLS]

Because HP has chosen to use point-to-point GRE, the EVI edge switch must perform packet replication. Ethernet protocols such as ARP rely heavily on broadcasts to function. In a two-site network this isn't a problem, but with three sites or more, the EVI ingress switch needs to replicate a broadcast EVI frame to every site. HP assures me that this can be performed at line rate, for any speed, for any number of data centers. That may be so, but creating full mesh replication across n×(n-1) WAN circuits could result in poor bandwidth utilization in networks that have high volumes of Ethernet broadcasts.
 
Cisco's OTV is also MAC-over-GRE-over-IP (using EoMPLS headers), but it adds a small OTV label into the IP header. The OTV control plane acts to propagate the MAC address routing table.
 
Like HP's EVI, OTV can complicate load balancing. Cisco's Virtual Port Channel (vPC) shares the control plane, while HP's IRF shares the data plane. Although a vPC-enabled pair of Nexus 7000 switches run as autonomous control planes, NX-OS can load balance evenly using IP. OTV load balances by using a 5-tuple hash and will distribute traffic over multiple paths for the WAN.
 
OTV also supports the use of multicast routing in the WAN to deliver much more efficient replication of Ethernet broadcasts in large-scale environments. Instead of meshing a large DCI core, Source Specific Multicast should (with good reason) be more efficient for multiple sites. Badly designed applications, such as Microsoft NLB, will be much more efficient using multicast.
 
EVI Compared To MPLS/VPLS
 
For many enterprises, MPLS is not a consideration. MPLS is a relatively complex group of protocols that requires a fair amount of time to learn and comprehend. However, building mission-critical business services that aren't MPLS is really hard. Service providers can offer L2 DCI using their MPLS networks with VPLS. Operationally, enterprise infrastructure is diverse and customised to each use case. Service provider networks tend toward homogeneity and simplicity because of the scale.
 
Some enterprises will buy managed VPLS services from service providers. They will also discover that such VPLS services are of variable quality, offer poor loop prevention, and can be expensive and inefficient. (For more, see the above-referenced report.) This is what drives Cisco and HP to deliver better options in OTV and EVI.
 
Other EVI Claims
 
HP notes that its solution doesn't require "multicast in the default configuration." HP wants to contrast itself to Cisco's OTV, which uses Source Specific Multicast in the WAN core, because many network engineers might think configuring multicast to be too hard. Building an SSM design over a Layer 3 WAN core is a substantial requirement and not a technology that most enterprise engineers would be comfortable configuring. On the other hand, configuring SSM over a Layer 2 WAN core (using Dark Fibre or DWDM) is trivial.
 
However, Cisco OTV has a unicast mode that works in a similar way to HP EVI, which most engineers would choose for simplicity. That said, the SSM WAN core offers scaling and efficiency if you need it, while HP's EVI does not.
 
The HP EVI approach is potentially more effective at load balancing WAN circuits than 5-tuple hashing in Cisco OTV, but it's unlikely to make much difference in deployment.
 
EVI's Enterprise Value
 
HP EVI is aimed at enterprises with private clouds. The technology looks like a good strategy. HP says EVI will be available in the A12500 switch in December. HP has a poor history of delivering on time (we're still waiting for TRILL and EVB), so plan accordingly. Cisco OTV is shipping and available in the Nexus 7000 and ASR products (for substantial license fees). HP says it won't charge for EVI.
 
Private clouds are shaping up to be a huge market in the next five years, and HP is addressing this space early by bringing L2 DCI capabilities to its products. HP EVI looks to be a good technology to meet customer needs. Combined with Multitenant Device Context, it should keep HP on competitive footing with Cisco. Of course, it's easy to make this kind of technology work in a PowerPoint. We'll have to wait and see how it works in real deployments.
 
 

Friday, August 17, 2012

Important Points in MPLS network Design

Courtesy - Broadband Nation

Before jumping into MPLS (Multi-Protocol Label Switching) for your network design, there are important items to consider. Take a step back and first consider what you need your network to do, how, and what must happen if there are issues.

Intent of the network is definitely a critical piece. It is so important to understand what you are putting over the network in order to engineer the optimal network. I've had the same conversations with clients when they realize that they can't have as many call paths as they would like and still be able to surf the net.

Also, business continuity is definitely key. Documenting the plan and understanding how traffic should flow in the event the primary path is unavailable for any reason ensures you have survivability in the instance of an interruption of service or outage of any kind.

If you understand the intent, you can accurately plan for outages or interruptions in your disaster plan. Most times, you don't need to have every type of traffic pass over the MPLS during an outage. You need to understand what is most important to your business, what has the biggest impact on your revenue, and then design a plan that ensures that you don't lose that piece of the puzzle for any length of time.

I think the most important piece is not even the Disaster Recovery plan, but rather the "business continuity" piece. The reality is you want the design to be flawless and to address the rare occasion before it happens. Having a "business continuity" plan in place allows you to continue business seamlessly in the event something does happen. The Disaster Recovery plan only addresses how you recover in the event of an outage, and the cost and time associated with the disaster. Business continuity will minimize the impact of all three and ensure you are still operational during this time. The other important pieces, of course, are the speed and security of the network.

Also key now that I think about it is default route pathing. This ties in to the intent of the network. Even if the original intent of the MPLS network is not to pass internet traffic .... having dynamic routing on the core so that one site can piggy back off of another in the case of an internet connection outage at one of the sites, is often one of the most useful side features that everyone always seems to forget or overlook. Sometimes this can be done with you changing your firewall/router's default gateway to point to the MPLS hop manually or can be done with OSPF/EIGRP.

Thursday, August 16, 2012

MPLS vs VPLS


MPLS is the typical underlying plumbing for a carrier style network core that can support L3 VPNs, VPLS and various other services, although other "stuff" can be used.

MPLS is the common way to do this and seems to be "best common practice" right now.

The carriers need big, large scale, flexible core networks that can support traffic from lots of customers on a common set of equipment and resilient WAN links, but provide separation between them.

VPLS mainly assumes Ethernet delivery, so can be good where that is available for all your needed locations, but may be limiting.

VPLS emulates a switched Ethernet LAN and as such suffers from the related scaling and diagnostic issues - I would not want to use VPLS to connect more than a few dozen sites in a single network, and you will need routers to control traffic well before that point.

Some other "Ethernet over cloud" type systems give different sets of tradeoffs - Ethernet pseudowire services for example scale better but need more detail design and planning.

L3 VPNs will work with any type of access - conventional 1.5m and 2m links may be all you can get in some places / countries, or you might need VSAT for that location in Africa and so on....

Many carriers have NNI links to others for L3 VPNs, so you can get to places outside their geography. International broadcasters for example may want links to every continent.

L3 is easier to use with QoS, works better with multicast and various others services.

L3 means the carrier(s) are involved in the IP topology, and in turn that may limit what protocols you can use - that may work well for you or just cause more hassle.

None of these issues are black and white, but a specific set of requirements will "push" you towards 1 type of system.

And real life is complicated - you may end up using both for a big network.

Technical jargon aside, I would break down the CIO’s comparison of the two technologies into two main categories:

1) Immediate Impact on the Organization’s Strategy

a. How does each technology meet the organization’s short-term needs/requirements?

b. What value does each technology add to both the organization (and the IT group)?

c. What’s the immediate impact on application performance (i.e. User Experience, etc…)?

d. What’s the immediate impact as it relates to budget?

2) Long-Term support for changes to that Strategy

a. Which solution can easily support business changes?

i. Do both solutions scale geographically?

ii. Can both solutions support ancillary services?

b. Which solution best supports technological changes in the business?

i. Consolidation/Centralization Strategies

ii. Convergence Strategies

iii. Cloud / SaaS Strategies

c. What impact, if any, does either solution have on

i. Long-term revenue generation.…

ii. Long-term cost containment.…

In other words ... do a solid business case analysis. NOT just a technical review. Accomplishing this will also help you when it comes time to "sell it" to senior management.

Courtesy - Broadband Nation

Sunday, July 1, 2012

MPLS spec introduced for cellular back-haul network service

For anyone performing cellular back-haul, there’s a new specification for handling wireless data traffic from a combination of traditional TDM networks and packet-based transport technologies as wireless operators migrate from 2G/3G to 4G and LTE services.


The Broadband Forum has just issued its “Technical Specification for MPLS in Mobile Backhaul Networks,” also known as TR-221.

TR-221 focuses on the applications of MPLS technology in a range of services that may be used to transport wireless traffic in the access and aggregation networks, including IP, TDM, ATM and Ethernet.

It defines the global requirements of MPLS technology in these networks in respect of encapsulation, signalling and routing, QoS, OAM, resiliency, security, and synchronization. It also covers expected services over the back-haul network, including voice, multimedia services, data traffic and multicast traffic, such as multimedia broadcast and multicast services (MBMS).

Adherence to these requirements will create global standards for MPLS-oriented equipment, establishing more network interoperability, speeding deployments and lowering the overall costs of the backhaul network, the Broadband Forum said.

Defining a range of reference architectures for MPLS-based mobile backhaul networks, TR-221 includes specifications for the various transport scenarios applicable to all mobile networks (2G, 3G and LTE). It also specifies the equipment requirements for the control, user and management planes to provide unified and consistent end-to-end transport services for mobile backhaul.



Robin Mersh, CEO of the Broadband Forum, said: “TR-221 is a critical part of establishing multi-vendor interoperability in converged MPLS-based backhaul networks. As mobile operators look to preserve their investment in traditional TDM and ATM networks whilst developing their 4G/LTE architectures, TR-221 will enable them to integrate new packet-based MPLS technologies into their established networks. Operators will be able to evolve their networks to be faster and more efficient to meet the increasing multimedia needs of the mobile user, whilst preserving a lower cost per bit in the backhaul network.”

Saturday, June 30, 2012

Advanced MPLS Interview Questions - Part 2

1. What group is responsible for creating MPLS standards?

The IETF's MPLS Working Group is charged with establishing core MPLS standards. Other IETF working groups are charged with developing standards covering areas such as Generalized MPLS, MPLS network management, Layer 2 encapsulation, L2 & L3 VPN services, and MPLS Traffic Engineering.

In addition, industry groups such as the Optical Internetworking Forum (OIF), The Optical Ethernet Forum, and the MFA Forum (MPLS/Frame/ATM) are working on other MPLS standards not related to the areas of focus of the IETF.

2. What is the MFA Forum?

The MFA is the union of the MPLS Forum, Frame Relay Forum, and ATM Forum. The MFA is an industry consortium dedicated to accelerating the adoption of Multiprotocol Label Switching (MPLS) and its associated technologies.

3. What MPLS related mailing lists are there and what are they used for?

The following is a list of current MPLS-related mailing lists:
The IETF's MPLS Working Group mailing list, which can be joined at https://www1.ietf.org/mailman/listinfo/mpls. This list is for discussion of MPLS standards development. Note that several of the other IETF working groups also host mailing lists for discussion of MPLS standards for specific applications.

The MPLS-OPS mailing list, which can be joined by visiting http://www.mplsrc.com/mplsops.shtml. This list is for the discussion of issues related to the design, deployment and management of MPLS-based networks.

LINUXMPLS - A Yahoo-based group and mailing list for the discussion of MPLS implementations for LINUX can be accessed at:

http://groups.yahoo.com/group/linuxmpls

4. What is MPLS?

MPLS stands for "Multiprotocol Label Switching". In an MPLS network, incoming packets are assigned a "label" by a "label edge router (LER)". Packets are forwarded along a "label switch path (LSP)" where each "label switch router (LSR)" makes forwarding decisions based solely on the contents of the label. At each hop, the LSR strips off the existing label and applies a new label which tells the next hop how to forward the packet.
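
As a simple sketch of that per-hop behavior, the following Python fragment models each LSR's label table as a dictionary: forwarding becomes a single exact-match lookup plus a label swap. All labels and interface names are invented for illustration:

# in-label -> (out-label, out-interface) per LSR
LFIB = {
    "R2": {100: (200, "ge-0/0/1")},
    "R3": {200: (300, "ge-0/0/2")},
}

def lsr_forward(router, in_label):
    out_label, out_if = LFIB[router][in_label]   # no IP route lookup needed
    return out_label, out_if

label = 100
for hop in ("R2", "R3"):
    label, out_if = lsr_forward(hop, label)
    print(hop, "swaps to label", label, "out", out_if)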

Label Switch Paths (LSPs) are established by network operators for a variety of purposes, such as to guarantee a certain level of performance, to route around network congestion, or to create IP tunnels for network-based virtual private networks. In many ways, LSPs are no different than circuit-switched paths in ATM or Frame Relay networks, except that they are not dependent on a particular Layer 2 technology.

An LSP can be established that crosses multiple Layer 2 transports such as ATM, Frame Relay or Ethernet. Thus, one of the true promises of MPLS is the ability to create end-to-end circuits, with specific performance characteristics, across any type of transport medium, eliminating the need for overlay networks or Layer 2 only control mechanisms.

To truly understand "What is MPLS", RFC 3031 - Multiprotocol Label Switching Architecture, is required reading.


5. How did MPLS evolve?

MPLS evolved from numerous prior technologies including Cisco's "Tag Switching", IBM's "ARIS", and Toshiba's "Cell-Switched Router". More information on each of these technologies can be found at http://www.watersprings.org/links/mlr/.

The IETF's MPLS Working Group was formed in 1997.

6. What problems does MPLS solve?

The initial goal of label based switching was to bring the speed of Layer 2 switching to Layer 3. Label based switching methods allow routers to make forwarding decisions based on the contents of a simple label, rather than by performing a complex route lookup based on destination IP address. This initial justification for technologies such as MPLS is no longer perceived as the main benefit, since Layer 3 switches (ASIC-based routers) are able to perform route lookups at sufficient speeds to support most interface types.

However, MPLS brings many other benefits to IP-based networks, including:

Traffic Engineering - the ability to set the path traffic will take through the network, and the ability to set performance characteristics for a class of traffic

VPNs - using MPLS, service providers can create IP tunnels throughout their network, without the need for encryption or end-user applications

Layer 2 Transport - New standards being defined by the IETF's PWE3 and PPVPN working groups allow service providers to carry Layer 2 services including Ethernet, Frame Relay and ATM over an IP/MPLS core

Elimination of Multiple Layers - Most carrier networks typically employ an overlay model in which SONET/SDH is deployed at Layer 1, ATM at Layer 2 and IP at Layer 3. Using MPLS, carriers can migrate many functions of the SONET/SDH and ATM control planes to Layer 3, simplifying network management and reducing network complexity. Eventually, carrier networks may be able to migrate away from SONET/SDH and ATM altogether, eliminating ATM's inherent "cell tax" in carrying IP traffic.

7. What is the status of the MPLS standard?

Most MPLS standards are currently in the "Internet Draft" phase, though several have now moved onto the RFC standards track. See "MPLS Standards" for a complete listing of current IDs and RFCs. For more information on the current status of various Internet Drafts, see the IETF's MPLS Working Group home page at http://www.ietf.org/html.charters/mpls-charter.html

There's no such thing as a single MPLS "standard". Instead, there is a set of RFCs and IDs that together allow the building of an MPLS system. For example, a typical IP router spec sheet will list about 20 RFCs with which the router complies. If you go to the IETF web site (http://www.ietf.org), click on "I-D Keyword Search", enter "MPLS" as your search term, and crank up the number of items to be returned (or visit http://www.mplsrc.com/standards.shtml), you'll find over 100 drafts currently stored. These drafts have a lifetime of six months.

Some of these drafts have been adopted by the IETF WG for MPLS. The filename for these drafts is prefixed by "draft-ietf-". Some of these drafts are now on the IETF Standards Track. This is indicated in the first few lines of the document with the term "Category: Standards Track". You can read up on this process in RFC 2600.

MPLS Components

8. What is a Label?

Section 3.1 of RFC 3031: "Multiprotocol Label Switching Architecture" defines a label as follows: "A label is a short, fixed length, locally significant identifier which is used to identify a FEC. The label which is put on a particular packet represents the "Forwarding Equivalence Class" to which that packet is assigned."

The MPLS Label is formatted as follows:

| Label (20 bits) | CoS (3 bits) | Stack (1 bit) | TTL (8 bits) |

The 32-bit MPLS label, also called the "shim" header, is located after the Layer 2 header and before the IP header. It contains the following fields:

The label field (20 bits) carries the actual value of the MPLS label.

The CoS field (3 bits) can affect the queuing and discard algorithms applied to the packet as it is transmitted through the network.

The Stack (S) field (1 bit) supports a hierarchical label stack; it is set on the bottom-most label.

The TTL (time-to-live) field (8 bits) provides conventional IP TTL functionality.
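A short Python sketch of packing and unpacking these four fields may make the layout clearer. The bit widths follow the format above; the sample values are arbitrary.

```python
# Sketch of the 32-bit MPLS shim header:
# 20-bit label | 3-bit CoS/EXP | 1-bit bottom-of-stack | 8-bit TTL.

def pack_shim(label: int, cos: int, bos: int, ttl: int) -> int:
    return (label << 12) | (cos << 9) | (bos << 8) | ttl

def unpack_shim(shim: int) -> dict:
    return {
        "label": (shim >> 12) & 0xFFFFF,  # 20 bits
        "cos":   (shim >> 9) & 0x7,       # 3 bits
        "bos":   (shim >> 8) & 0x1,       # 1 bit (bottom of stack)
        "ttl":   shim & 0xFF,             # 8 bits
    }

shim = pack_shim(label=100, cos=5, bos=1, ttl=64)
print(unpack_shim(shim))  # {'label': 100, 'cos': 5, 'bos': 1, 'ttl': 64}
```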

9. What is a Label Switch Path?

An LSP is a specific traffic path through an MPLS network. An LSP is provisioned using signaling protocols such as RSVP-TE or CR-LDP. Either protocol establishes a path through the MPLS network and reserves the necessary resources to meet pre-defined service requirements for the data path.

LSPs must be contrasted with traffic trunks. From RFC 2702: "Requirements for Traffic Engineering Over MPLS," "A traffic trunk is an aggregation of traffic flows of the same class which are placed inside a LSP. It is important, however, to emphasize that there is a fundamental distinction between a traffic trunk and the path, and indeed the LSP, through which it traverses. In practice, the terms LSP and traffic trunk are often used synonymously. The path through which a trunk traverses can be changed. In this respect, traffic trunks are similar to virtual circuits in ATM and Frame Relay networks."

10. What is a Label Distribution Protocol?

A label distribution protocol (LDP) is a specification that lets a label switch router (LSR) distribute labels to its LDP peers. When an LSR assigns a label to a forwarding equivalence class (FEC), it needs to let its relevant peers know of this label and its meaning, and LDP is used for this purpose. Since a set of labels from the ingress LSR to the egress LSR in an MPLS domain defines a Label Switched Path (LSP), and since labels map network layer routing onto data link layer switched paths, LDP helps establish an LSP by using a set of procedures to distribute labels among the LSR peers.

Label Switching Routers (LSRs) use labels to forward traffic. A fundamental step in label switching is that LSRs agree on which labels they should use to forward traffic. They come to this common understanding by using a label distribution protocol.

The Label Distribution Protocol is a major part of MPLS. Similar mechanisms for label exchange existed in vendor implementations such as Ipsilon's Flow Management Protocol (IFMP), IBM's Aggregate Route-based IP Switching (ARIS), and Cisco's Tag Distribution Protocol. LDP and labels are the foundation of label switching.

LDP has the following basic characteristics:

It provides an LSR discovery mechanism to enable LSR peers to find each other and establish communication
It defines four classes of messages: DISCOVERY, ADJACENCY, LABEL ADVERTISEMENT, and NOTIFICATION messages
It runs over TCP to provide reliable delivery of messages (with the exception of DISCOVERY messages, which use UDP)
LDP label distribution and assignment may be performed in several different modes:


Downstream unsolicited versus downstream-on-demand label assignment (a toy sketch of the unsolicited mode follows this list)
Ordered versus independent LSP control
Liberal versus conservative label retention
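As a toy illustration of the downstream-unsolicited mode, the following Python sketch shows one LSR binding a local label to a FEC and pushing the binding to a peer without being asked. The class and message structure are invented for illustration and bear no resemblance to LDP's actual TLV wire encoding (see RFC 5036 for that).

```python
# Toy illustration of downstream-unsolicited label advertisement.
# Names and structures are invented; real LDP runs over TCP with
# defined TLV encodings.

import itertools

label_pool = itertools.count(start=16)  # labels 0-15 are reserved

class LSR:
    def __init__(self, name):
        self.name = name
        self.bindings = {}       # FEC -> local label
        self.peer_bindings = {}  # (peer, FEC) -> remote label

    def advertise(self, fec, peers):
        """Bind a local label to a FEC and push it to all peers
        without waiting for a request (downstream unsolicited)."""
        label = next(label_pool)
        self.bindings[fec] = label
        for peer in peers:
            peer.receive(self.name, fec, label)

    def receive(self, peer_name, fec, label):
        self.peer_bindings[(peer_name, fec)] = label

a, b = LSR("A"), LSR("B")
b.advertise("10.0.0.0/8", peers=[a])
print(a.peer_bindings)   # {('B', '10.0.0.0/8'): 16}
```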

11. What's the difference between CR-LDP and RSVP-TE?

CR-LDP and RSVP-TE are both signaling mechanisms used to support Traffic Engineering across an MPLS backbone. RSVP is a QoS signaling protocol that is an IETF standard and has existed for quite some time. RSVP-TE extends RSVP to support label distribution and explicit routing, while CR-LDP proposed to extend LDP (designed for hop-by-hop label distribution) to support QoS signaling and explicit routing. MPLS Traffic Engineering tunnels are not limited to IP route selection procedures and thus can spread network traffic more uniformly across the backbone, taking advantage of all available links. A signaling protocol is required to set up these explicit MPLS routes or tunnels.


There are many similarities between CR-LDP and RSVP-TE for constraint-based routing. The Explicit Route Objects that are used are extremely similar. Both protocols use ordered Label Switched Path (LSP) setup procedures. Both protocols include some QoS information in the signaling messages to enable resource allocation and LSP establishment to take place automatically.

At the present time CR-LDP development has ended, and RSVP-TE has emerged as the "winner" among traffic engineering protocols.


12. What is a "Forwarding Equivalency Class"?


Forwarding Equivalency Class (FEC) is a set of packets which will be forwarded in the same manner (e.g., over the same path with the same forwarding treatment). Typically packets belonging to the same FEC will follow the same path in the MPLS domain. While assigning a packet to an FEC the ingress LSR may look at the IP header and also some other information such as the interface on which this packet arrived. The FEC to which a packet is assigned is identified by a label.

One example of an FEC is a set of unicast packets whose network layer destination address matches a particular IP address prefix. A set of multicast packets with the same source and destination network layer addresses is another example of an FEC. Yet another example is a set of unicast packets whose destination addresses match a particular IP address prefix and whose Type of Service bits are the same.
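To illustrate the first example, here is a minimal Python sketch of ingress classification: packets are assigned to a FEC by longest-prefix match, and each FEC is bound to a label. The prefixes and label values are hypothetical.

```python
# Sketch: classifying packets into FECs by longest-prefix match
# at the ingress LER, then returning the label bound to the FEC.

import ipaddress

fec_table = {
    ipaddress.ip_network("10.0.0.0/8"): 100,
    ipaddress.ip_network("10.1.0.0/16"): 200,  # a more specific FEC
}

def classify(dst: str) -> int:
    """Return the label for the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [n for n in fec_table if addr in n]
    best = max(matches, key=lambda n: n.prefixlen)
    return fec_table[best]

print(classify("10.1.2.3"))    # 200 (matches the /16)
print(classify("10.200.0.1"))  # 100 (matches only the /8)
```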

13. How are Label Switch Paths built?


A Label Switch Path (LSP) is the sequence of LSRs that packets belonging to a certain FEC travel through in order to reach their destination. Since MPLS allows a hierarchy of labels known as the label stack, it is possible to have different LSPs at different label levels for a packet to reach its destination. So, more formally, the level-m LSP of a packet is the set of LSRs that packet p has to travel through at level m to reach its destination. Please refer to Section 3.15 of RFC 3031 - Multiprotocol Label Switching Architecture, for a very formal and complete definition.
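The label stack behaves like an ordinary push/pop stack. The following Python sketch illustrates a level-2 tunnel carried over a level-1 LSP (per RFC 3031, the bottom label is level 1); the label values are illustrative.

```python
# Sketch of the hierarchical label stack: a packet on a level-1
# LSP gets a second, level-2 label pushed when it enters a tunnel;
# transit LSRs switch only on the top label.

stack = []
stack.append(500)   # ingress of the base (level-1) LSP: push
print("switch on", stack[-1])   # 500

stack.append(900)   # entering a level-2 tunnel: push another label
print("switch on", stack[-1])   # 900 - the inner label is untouched

stack.pop()         # tunnel egress pops the level-2 label
print("switch on", stack[-1])   # 500 - back on the level-1 LSP
```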



13a. What is the relationship between MPLS and the Interior Routing Protocol?

Interior Gateway Protocols (IGPs), such as OSPF and IS-IS, are used to define reachability and the binding/mapping between FECs and next-hop addresses. MPLS learns routing information from the IGP (e.g., OSPF, IS-IS). A link-state IGP is typically already running in large corporate or service provider networks. No changes are required to IGP routing protocols to support MPLS, MPLS-TE, MPLS QoS, or MPLS-BGP VPNs.

14. What other protocols does MPLS support besides IP?


By definition, Multiprotocol Label Switching supports multiple protocols. At the network layer MPLS supports IPv6, IPv4, IPX and AppleTalk. At the link layer MPLS supports Ethernet, Token Ring, FDDI, ATM, Frame Relay, and point-to-point links. It can essentially carry network layer protocols beyond IP and layer on top of any link layer protocol. In addition, development efforts have allowed MPLS not only to work over any data link layer protocol, but also to natively carry a data link layer protocol over IP, thus enabling services such as Ethernet over MPLS.

MPLS and ATM


15. What are the differences between MPLS and ATM?

MPLS brings the traffic engineering capabilities of ATM to packet-based networks. It works by tagging IP packets with "labels" that specify a route and priority. It combines the scalability and flexibility of routing with the performance and traffic management of Layer 2 switching. It can run over nearly any transport medium (ATM, Frame Relay, POS, Ethernet and so on) instead of being tied to a specific Layer 2 encapsulation. And because it uses IP addressing, it uses common routing and signaling protocols (OSPF, IS-IS, RSVP and so on).


16. Does MPLS replace ATM?


MPLS was not designed to replace ATM, but the practical reality of the dominance of IP-based protocols, coupled with MPLS's inherent flexibility, has led many service providers to migrate their ATM networks to ones based on MPLS.


MPLS can co-exist with ATM switches and eliminate complexity by mapping IP addressing and routing information directly into ATM switching tables. The MPLS label-swapping paradigm is the same mechanism that ATM switches use to forward ATM cells. For an ATM-LSR, the label-swapping function is performed by the ATM forwarding component, with label information carried in the ATM header, specifically the VCI and VPI fields. MPLS provides the control component for IP on both the ATM switches and routers. On ATM switches, PNNI, the ATM ARP server, and the NHRP server are replaced with MPLS for IP services. The ATM forwarding plane (i.e., 53-byte cells) is preserved, and PNNI may still be used on ATM switches to provide ATM services on non-MPLS ports. Therefore, an IP+ATM switch delivers the best of both worlds: ATM for fast switching and IP protocols for IP services, all in a single switch.


17. What is "Ships in the night"?


Some vendors support running MPLS and ATM in the same device. Generally speaking, these two processes run separately: a change in an MPLS path has no bearing on ATM virtual circuits. This practice is commonly referred to as "ships in the night" since the two processes act alone. However, in some cases there is some interaction between the two processes. For example, some vendors support a mechanism whereby a reservation of resources by a label switch path is detected by the ATM control mechanism to avoid resource conflicts.

"Ships in the night" is used as a transitioning mechanism as networks migrate their ATM control planes to MPLS. Networks initially preserve ATM for carrying time sensitive data traffic such as voice and video, and for connecting to non-MPLS enabled nodes, while concurrently running MPLS to carry data. Over time there will no longer be a need for separate ATM flows and therefore networks will only carry MPLS label-based traffic.


MPLS Traffic Engineering


18. What does MPLS traffic engineering accomplish?

Traffic engineering refers to the process of selecting the paths chosen by data traffic in order to balance the traffic load on the various links, routers, and switches in the network. Traffic engineering is most important in networks where multiple parallel or alternate paths are available.


A major goal of Internet Traffic Engineering is to facilitate efficient and reliable network operations while simultaneously optimizing network resource utilization and traffic performance.

The goal of TE is to compute a path from one given node to another (source routing), such that the path does not violate the constraints (e.g., bandwidth or administrative requirements) and is optimal with respect to some scalar metric. Once the path is computed, TE (a.k.a. constraint-based routing) is responsible for establishing and maintaining forwarding state along that path.

19. What are the components of MPLS-TE?

In order to support Traffic engineering, besides explicit routing (source routing), the following components should be available:

Ability to compute a path at the source by taking into account all the constraints. To do so, the source needs to have all the information either available locally or obtained from other routers in the network (e.g., the network topology)

Ability to distribute information about the network topology and the attributes associated with links throughout the network; once the path is computed, a way is also needed to support forwarding along that path

Ability to reserve network resources and to modify link attributes (as the result of certain traffic taking certain routes)

MPLS TE leverages several foundation technologies:

Constrained shortest path first (CSPF) algorithm used in path calculation. This is a modified version of the well-known SPF algorithm, extended to support constraints (see the sketch after this list)

RSVP extensions, used to establish forwarding state along the path as well as to reserve resources along it

Link-state IGPs with extensions (OSPF with opaque LSAs, IS-IS with TLVs (type, length, value) in its link state packets) to propagate topology and link-attribute changes
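As a concrete illustration of constraint-based path computation, here is a small Python sketch that prunes links failing a bandwidth constraint and then runs plain Dijkstra on what remains. The topology, metrics and bandwidth figures are made up for illustration.

```python
# Sketch of CSPF: prune links that fail the bandwidth constraint,
# then run shortest-path-first on the remaining graph.

import heapq

# node -> list of (neighbor, igp_metric, available_bandwidth_mbps)
topology = {
    "A": [("B", 10, 1000), ("C", 10, 100)],
    "B": [("D", 10, 1000)],
    "C": [("D", 10, 1000)],
    "D": [],
}

def cspf(src, dst, min_bw):
    """Shortest path by metric, using only links with >= min_bw."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, metric, bw in topology[node]:
            if bw >= min_bw:   # the constraint check
                heapq.heappush(heap, (cost + metric, nbr, path + [nbr]))
    return None

print(cspf("A", "D", min_bw=500))  # (20, ['A', 'B', 'D']) - avoids the 100 Mbps A-C link
```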


20. How does MPLS merge traffic flows?

MPLS allows the mapping from IP packet to forwarding equivalence class (FEC) to be performed only once at the ingress to an MPLS domain. A FEC is a set of packets that can be handled equivalently for the purpose of forwarding and thus is suitable for binding to a single label.


From a forwarding point of view, packets within the same subset are treated by the LSR in the same way, even if the packets differ from each other with respect to the information in the network layer header. The mapping between the information carried in the network layer headers of packets and the entries in the forwarding table of the LSR is many-to-one. That is, packets with different network layer header contents can be mapped into the same FEC. (An example of a FEC: the set of unicast packets whose network layer destination addresses match a particular IP address prefix.)

21. How are loops prevented in MPLS networks?

Before focusing on MPLS loop prevention, let's briefly introduce the different loop-handling schemes.

Generally speaking, loop handling can be split into two categories:

Loop prevention: provides methods for avoiding loops before any packets are sent on the path - e.g., path vector

Loop mitigation (survival + detection): minimizes the negative effects of loops even though short-term transient loops may be formed - e.g., Time-To-Live (TTL), where the packet is discarded if the TTL reaches 0, or dynamic routing protocols that converge rapidly to non-looping paths

As far as loop mitigation is concerned, MPLS labeled packets may carry a TTL field that operates just like the IP TTL to enable packets caught in transient loops to be discarded.


However, for certain media such as ATM and Frame Relay, where TTL is not available, MPLS uses buffer allocation as a form of loop mitigation. It is mainly used on ATM switches, which can limit the amount of switch buffer space that a single VC may consume.

Another technique for non-TTL segments is the hop-count approach: hop-count information is carried within Label Distribution Protocol messages [3]. It works like a TTL: the hop count decreases by 1 for every successful label binding.

A third alternative adopted by MPLS is an optional loop detection technique called path vector. A path vector contains a list of the LSRs that a label distribution control message has traversed. Each LSR that propagates a control packet (to either create or modify an LSP) adds its own identifier to the path vector list. A loop is detected when an LSR receives a message with a path vector that contains its own identifier. This technique is also used by the BGP routing protocol with its AS path attribute.
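The path-vector check is simple to sketch in Python; the LSR identifiers below are hypothetical.

```python
# Sketch of path-vector loop detection during label distribution:
# each LSR appends its ID before propagating; seeing your own ID
# in the received vector signals a loop.

def propagate(lsr_id: str, path_vector: list[str]) -> list[str]:
    if lsr_id in path_vector:
        raise RuntimeError(f"loop detected at {lsr_id}: {path_vector}")
    return path_vector + [lsr_id]

try:
    pv = []
    for hop in ["A", "B", "C", "A"]:   # "A" appears twice: a loop
        pv = propagate(hop, pv)
except RuntimeError as e:
    print(e)   # loop detected at A: ['A', 'B', 'C']
```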

22. How does MPLS perform failure recovery?

When a link goes down it is important to reroute all trunks that were routed over this link. Since the path taken by a trunk is determined by the LSR at the start of the MPLS path (head end), rerouting has to be performed by the head end LSR. To perform rerouting, the head end LSR could rely either on the information provided by IGP or by RSVP/CR-LDP.

However, several MPLS-specific resiliency features have been developed, including Fast Re-Route, RAPID, and Bidirectional Forwarding. See RFC 3469: "Framework for Multi-Protocol Label Switching (MPLS)-based Recovery" for additional information.

23. What differences are there in running MPLS in OSPF versus IS-IS environments?


This is not an MPLS question but an IGP (Interior Gateway Protocol) question. The MPLS extensions, specified in IETF RFCs, are supported for both OSPF and IS-IS. Real-world MPLS and BGP-VPN deployments have run on both protocols for some time now.

There is much debate over which IGP is best, usually centered on scalability. Conventional wisdom holds that IS-IS is more scalable than OSPF: a single OSPF area can support 150-plus routers, while a single IS-IS area can support 500-plus routers. However, very large IS-IS and OSPF networks have both been deployed.

Ultimately, it is best to first understand the benefits and disadvantages of each protocol, then use the customer and network requirements to choose the IGP that best suits your needs.

24. Can there be two or more Autonomous Systems within the same MPLS domain?

This is possible only under very restricted circumstances. Consider the ASBRs of two adjacent ASes. If either or both ASBRs summarize eBGP routes before distributing them into their IGP, or if there is any other set-up where the IGP routes cover a set of FECs which differs from that of the eBGP routes (and this would almost always be the case), then the ASBRs cannot forward traffic based on the top-level label. A similar argument applies to TE tunnels. Some traffic usually will be either IP forwarded by the ASBR, or forwarded based on a non-top-level label.

So there would usually be 2-3 MPLS forwarding domains if there were two ASes: one for each of the two ASes, and possibly one for the link between the two ASBRs (in the case that labelled packets instead of IP packets are forwarded between the two ASBRs).

Also, it's likely that the ASBRs could not be ATM-LSRs, as ATM-LSRs typically have limited or no capability of manipulating label stacks or forwarding unlabelled IP traffic.

Another example (thanks to Robert Raszuk) is the multi-provider application of BGP+MPLS VPNs. As described earlier, there are usually no *top-level* LSPs established across the two (or more) provider ASes involved, so it can be argued that:

(1) the two ASes are separate administrative domains. However, there are some LSPs established across the two ASes at a lower level in the label stack, so it can also be argued that

(2) the two ASes form a single MPLS domain with respect to that lower level of the stack.

(1) and (2) are both true, which implies that different definitions of the boundary of the administrative domains can exist with respect to different levels in the label stack. It is also (in hindsight) obvious that different MPLS domain boundaries can exist with respect to different levels of the label stack.

MPLS VPNs

25. How does MPLS enable VPNs?

Since MPLS allows for the creation of "virtual circuits," or tunnels, across an IP network, it is logical that service providers would look to use MPLS to provision Virtual Private Network services. Several standards have been proposed to allow service providers to use MPLS to provision VPN services that isolate a customer's traffic across the provider's IP network and provide secure end-to-end connectivity for customer sites.

It should be noted that using MPLS for VPNs simply provides traffic isolation, much like an ATM or Frame Relay service. MPLS currently has no mechanism for packet encryption, so if customer requirements included encryption, some other method, such as IPsec, would have to be employed. The best way to think of MPLS VPNs is to consider them the equivalent of a Frame Relay or ATM virtual circuit.

26. What alternatives are there for implementing VPNs over MPLS?

There are multiple proposals for using MPLS to provision IP-based VPNs. One proposal (MPLS/BGP VPNs) enables MPLS VPNs via extensions to the Border Gateway Protocol (BGP). In this approach, BGP propagates VPN-IPv4 information using the BGP multiprotocol extensions (MP-BGP) for handling these extended addresses. It propagates reachability information (VPN-IPv4 addresses) among edge Label Switch Routers (Provider Edge routers). The reachability information for a given VPN is propagated only to other members of that VPN. The BGP multiprotocol extensions identify the valid recipients for VPN routing information. All the members of the VPN learn routes to the other members.

Another proposal for using MPLS to create IP-VPN's is based on the idea of maintaining separate routing tables for various virtual private networks and does not involve BGP.
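A minimal Python sketch may help illustrate this separate-routing-tables idea: each VPN gets its own table, so identical customer prefixes stay isolated. The VRF names, prefixes and label strings below are invented for illustration.

```python
# Sketch of per-VPN routing tables: overlapping customer address
# space stays isolated because each lookup is scoped to one table.

vrf_tables = {
    "customer-red":  {"10.0.0.0/8": "PE2-label-300"},
    "customer-blue": {"10.0.0.0/8": "PE3-label-400"},  # same prefix, different VPN
}

def lookup(vrf: str, prefix: str) -> str:
    """Route lookup constrained to one customer's table."""
    return vrf_tables[vrf][prefix]

print(lookup("customer-red", "10.0.0.0/8"))   # PE2-label-300
print(lookup("customer-blue", "10.0.0.0/8"))  # PE3-label-400
```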

Most implementations of Layer 3 MPLS-VPNs are based on RFC 2547.

27. What is the "Martini Draft'?

The "Martini Draft" actually refers to set of Internet drafts co-authored by Luca Martini. These drafts define how MPLS can be used to support Layer 2 transport services such as Ethernet, Frame Relay and/or ATM. Martini drafts define Layer 2 encapsulation methods, as well as Layer 2 transport signaling methods.

Many service providers wish to use MPLS to provision L2-based services to provide an easy migration for the current L2 service customers, while the providers migrate their networks to MPLS. Service providers can use standards such as Martini Draft to provide a myriad of services over their MPLS networks, so customers can simply choose the technology that is best suited to their environment.


The Pseudowire Emulation Edge-to-Edge (PWE3) working group is currently developing standards for Layer 2 encapsulation (including Draft-Martini and other supporting standards). Current working group drafts can be located at www.mplsrc.com/standards.shtml under the sub-heading "Layer 2 VPNs and Layer 2 Emulation."

28. What is a "Layer 2 VPN"

Layer 2 VPNs are an extension of the work being undertaken in the PWE3 working group. Layer 2 VPNs allow service providers to provision Layer 2 services such as Frame Relay, ATM and Ethernet between customer locations over an IP/MPLS backbone. Service providers can thus provision Layer 2 services over their IP networks, removing the need to maintain separate IP and Frame Relay/ATM network infrastructures. This allows service providers to simplify their networks and reduce operating expenses.

The IETF's "Layer 2 Virtual Private Networks (l2vpn)" working group is currently defining standards for provisioning Layer 2 VPN services. Current working group drafts can be located at www.mplsrc.com/standards.shtml under the sub-heading "Layer 2 VPNs and Layer 2 Emulation."

29. What is a Virtual Private LAN Service (VPLS)?

VPLS refers to a method for using MPLS to create virtual LAN services based on Ethernet. In this type of service, all edge devices maintain MAC address tables for all reachable end nodes, much in the same way as a LAN switch.
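A toy Python sketch of that edge behavior: learn the source MAC against the pseudowire it arrived on, forward known destinations, and flood unknown ones, just as a LAN switch would. The pseudowire names and MAC addresses are illustrative.

```python
# Sketch of VPLS edge forwarding: MAC learning per pseudowire,
# flood on unknown destination, forward on known destination.

mac_table: dict[str, str] = {}   # MAC -> pseudowire it was learned on
pseudowires = ["pw-to-site-A", "pw-to-site-B", "pw-to-site-C"]

def handle_frame(src_mac, dst_mac, in_pw):
    mac_table[src_mac] = in_pw            # learn, like a LAN switch
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]       # known unicast: one pseudowire
    return [pw for pw in pseudowires if pw != in_pw]  # flood the rest

print(handle_frame("aa:aa", "bb:bb", "pw-to-site-A"))  # flood (unknown dst)
print(handle_frame("bb:bb", "aa:aa", "pw-to-site-B"))  # ['pw-to-site-A']
```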

VPLS services enable enterprises to extend Ethernet reachability across geographic distances served by MPLS services. Several alternatives for enabling VPLS services are in development by the L2VPN working group. Please refer to drafts from that working group for additional information. Also see the Juniper Networks white paper "VPLS: Scalable Transparent LAN Services."

30. Are MPLS-VPNs secure?

Among many network security professionals, the term "VPN" implies "encrypted" tunnels across a public network. Since MPLS-VPNs do not require encryption, there is often concern over the security implications of using MPLS to tunnel non-encrypted traffic over a public IP network. There are a couple of points to consider in this debate:

MPLS-VPN traffic is isolated by the use of tags, much in the same way ATM and Frame Relay PVCs are kept isolated in a public ATM/Frame Relay network. This implies that security of MPLS-VPNs is equivalent to that of Frame Relay or ATM public network services. Interception of any of these three types of traffic would require access to the service provider network.

MPLS-VPNs do not prohibit security. If security is an issue, traffic can be encrypted before it is encapsulated into MPLS by using a protocol such as IPSec or SSL.

The debate over MPLS security really comes down to the requirements of the customer. Customers comfortable with carrying their traffic over public ATM or Frame Relay services should have the same level of comfort with MPLS-VPN services. Customers requiring additional security should employ encryption in addition to MPLS.


MPLS Quality of Service

31. What kinds of QoS protocols does MPLS support?

MPLS supports the same QoS capabilities as IP. These mechanisms are IP Precedence, Committed Access Rate (CAR), Random Early Detection (RED), Weighted RED, Weighted Fair Queuing (WFQ), Class-Based WFQ, and Priority Queuing. Proprietary and non-standard QoS mechanisms can also be supported, but they are not guaranteed to interoperate with other vendors' equipment.

Since MPLS also supports reservation of Layer 2 resources, MPLS can deliver finely grained quality of service, much in the same manner as ATM and Frame Relay.


32. How do I integrate MPLS and DiffServ?

DiffServ can support up to 64 classes, while the MPLS shim header supports up to 8. The shim header has a 3-bit field defined "for experimental use," and this poses a problem: the EXP field is only 3 bits long, whereas the DiffServ field is 6 bits. There are different ways to work around this mismatch.

Two alternatives address this problem, called the Label-LSP (L-LSP) and Exp-LSP (E-LSP) models, but they introduce complexity into the architecture. The DiffServ model essentially defines the interpretation of the TOS bits. As long as the IP precedence bits map to the EXP bits, the same interpretation as in the DiffServ model can be applied to them. Where the DiffServ model uses additional bits, the label value can be used to interpret the meaning of the remaining bits. Recognizing that 3 bits are sufficient to identify the required number of classes, the remaining bits in the DiffServ model identify the drop priority; these drop priorities can be mapped into an L-LSP, in which case the label identifies the class the packet belongs to while the EXP bits convey the drop priority.
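One simple work-around at the ingress LER is a static many-to-one mapping from the 6-bit DSCP to the 3 EXP bits, as the Python sketch below shows by keeping the class-selector bits. This particular mapping is only an assumption for illustration; operators define their own.

```python
# Sketch: compressing the 6-bit DiffServ field into 3 EXP bits
# by taking the top three (class selector) bits. Illustrative only.

def dscp_to_exp(dscp: int) -> int:
    """Map the 64 possible DSCP values onto 8 EXP values."""
    return (dscp >> 3) & 0x7

print(dscp_to_exp(46))  # EF  (101110) -> EXP 5
print(dscp_to_exp(10))  # AF11 (001010) -> EXP 1
print(dscp_to_exp(0))   # best effort  -> EXP 0
```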

Many service providers have added, or will add, just a few classes. Even this small enhancement will be hard enough to provision, manage and sell, but it is an effective strategy for getting to market quickly with a value-added service.

The following classes may be most appropriate for an initial deployment of MPLS QoS:

High-priority, low-latency "Premium" class (Gold Service)
Guaranteed-delivery "Mission-Critical" class (Silver Service)
Low-priority "Best-Effort" class (Bronze Service)

33. How do I integrate MPLS and ATM QoS?

MPLS makes it possible to apply QoS across very large routed or switched networks because Service Providers can designate sets of labels that have special meanings, such as service class. Traditional ATM and Frame Relay networks implement CoS with point-to-point virtual circuits, but this is not scalable for IP networks. Placing traffic flows at the edge into service classes enables providers to engineer and manage classes throughout the network.

If service providers manage networks based on service classes, not point-to-point connections, they can substantially reduce the amount of detail they must track and increase efficiency without losing functionality. Compared to per-circuit management, MPLS-enabled CoS provides virtually all of the benefit with far less complexity. Using MPLS to establish IP CoS has the added benefit of eliminating per-VC configuration. The entire network is easier to provision and engineer.


Generalized MPLS

34. What is "Generalized MPLS" or "GMPLS"

From "Generalized Multi-Protocol Label Switching Architecture" "Generalized MPLS extends MPLS to encompass time-division (e.g. SONET ADMs), wavelength (optical lambdas) and spatial switching (e.g. incoming port or fiber to outgoing port or fiber)."

GMPLS represents a natural extension of MPLS, allowing MPLS to be used as the control mechanism for configuring not only packet-based paths, but also paths through non-packet devices such as optical switches, TDM muxes, and SONET ADMs.

35. What are the components of GMPLS?

GMPLS introduces a new protocol called the "Link Management Protocol" or LMP. LMP runs between adjacent nodes and is responsible for establishing control channel connectivity as well as failure detection. LMP also verifies connectivity between channels.

Additionally, the IETF's "Common Control and Measurement Plane" working group (ccamp) is working on defining extensions to interior gateway routing protocols such as OSPF and IS-IS to enable them to support GMPLS operation.

36. What are the features of GMPLS?


GMPLS supports several features including:

Link Bundling - the grouping of multiple, independent physical links into a single logical link

Link Hierarchy - the issuing of a suite of labels to support the various requirements of physical and logical devices across a given path

Unnumbered Links - the ability to configure paths without requiring an IP address on every physical or logical interface

Constraint Based Routing - the ability to automatically provision additional bandwidth, or change forwarding behavior based on network conditions such as congestion or demands for additional bandwidth

37. What are the "Peer" and "Overlay" models?

GMPLS supports two methods of operation, peer and overlay. In the peer model, all devices in a given domain share the same control plane. This provides true integration between optical switches and routers. Routers have visibility into the optical topology and routers peer with optical switches. In the overlay model, the optical and routed (IP) layers are separated, with minimal interaction. Think of the overlay model as the equivalent of today's ATM and IP networks, where there is no direct connection between the ATM layer and the IP routing layer.

The peer model is inherently simpler and more scalable, but the overlay model provides fault isolation and separate control mechanisms for the physical and routed network layers, which may be more attractive to some network operators.

38. What is the "Optical Internetworking Forum"?

The Optical Internetworking Forum (OIF) is an open industry organization of equipment manufacturers, telecom service providers and end users dedicated to promoting the global development of optical internetworking products and fostering the development and deployment of interoperable products and services for data switching and routing using optical networking technologies.

An Introduction to the Optical Internetworking Forum White Paper can be found at http://www.oiforum.com/

Voice over MPLS

39. Can voice and video traffic be natively encapsulated into MPLS?

Yes. The MFA Forum has released a bearer transport implementation agreement, which can be viewed at http://www.mfaforum.org/VoMPLS_IA.pdf.

MPLS Management

40. How are MPLS networks managed?

Currently, most MPLS implementations are managed using CLI. Tools such as WANDL's NPAT simulator allow MPLS networks to be modeled prior to deployment.

Several companies in the operational support systems product space have introduced tools designed to ease MPLS network management and automatically provision LSPs.

41. Are there any MPLS-specific MIBs?

Yes. Several Internet drafts have proposed creating MPLS-specific MIBs.

42. Is there open source MPLS code to test MPLS?

Yes. Several open source implementations of MPLS currently exist.


MPLS Training

43. What shows and conferences provide information on MPLS?

Several conferences are devoted to, or include presentations on, MPLS. These include:

"MPLScon" held each May in New York City
"MPLS World Congress" held each February in Paris
"MPLS 200x" held each fall in Washington D.C.

MPLS Interoperability Testing

44. Are there any labs that are performing MPLS interoperability testing?

Several groups and organizations conduct MPLS interoperability testing, including:

The University of New Hampshire Interoperability Lab has set up an MPLS Consortium for vendors to test the interoperability of their products and to support MPLS standards development. More information is available on its web site at http://www.iol.unh.edu/consortiums/mplsServices/.

Isocore in Fairfax, VA conducts interoperability testing and hosts the "MPLS 200x" annual event each fall in Washington D.C.

The MFA Forum has conducted several GMPLS interoperability testing events at conferences such as SuperComm and Next Generation Networks.

EANTC AG is a vendor-neutral network test center located in Berlin, Germany that conducts independent MPLS interoperability testing.

Photonic Internet Lab is supported by the Government of Japan and provides testing and simulation efforts for GMPLS development.
