Network Enhancers - "Delivering Beyond Boundaries"

Tuesday, May 31, 2011

Next Generation Internet

The Internet has had an enormous impact on people's lives around the world in the ten years since Google's founding. It has changed politics, entertainment, culture, business, health care, the environment and just about every other topic you can think of. Which got us to thinking, what's going to happen in the next ten years? How will this phenomenal technology evolve, how will we adapt, and (more importantly) how will it adapt to us? We asked ten of our top experts this very question, and during September (our 10th anniversary month) we are presenting their responses. As computer scientist Alan Kay has famously observed, the best way to predict the future is to invent it, so we will be doing our best to make good on our experts' words every day. - Karen Wickre and Alan Eagle, series editors

Historically, the Internet has been all about connectivity between computers and among people. The World Wide Web opened enormous opportunities and motivations for the injection of content into the Internet, and search engines, such as Google's, provided a way for people to find the right content for their interests. Of course, the Internet continues to develop: new devices will find their way onto the net and new ways to access it will evolve.

In the next decade, around 70% of the human population will have fixed or mobile access to the Internet at increasingly high speeds, up to gigabits per second. We can reliably expect that mobile devices will become a major component of the Internet, as will appliances and sensors of all kinds. Many of the things on the Internet, whether mobile or fixed, will know where they are, both geographically and logically. As you enter a hotel room, your mobile will be told its precise location including room number. When you turn your laptop on, it will learn this information as well--either from the mobile or from the room itself. It will be normal for devices, when activated, to discover what other devices are in the neighborhood, so your mobile will discover that it has a high resolution display available in what was once called a television set. If you wish, your mobile will remember where you have been and will keep track of RFID-labeled objects such as your briefcase, car keys and glasses. "Where are my glasses?" you will ask. "You were last within RFID reach of them while in the living room," your mobile or laptop will say.

The Internet will transform the video medium as well. From its largely programmed, scheduled and streamed delivery today, video will become an interactive medium in which the choice of content and advertising will be under consumer control. Product placement will become an opportunity for viewers to click on items of interest in the field of view to learn more about them including but not limited to commercial information. Hyperlinks will associate the racing scene in Star Wars I with the chariot race in Ben Hur. Conventional videoconferencing will be augmented by remotely controlled robots with an ability to move around, focus cameras and microphones, and perhaps even directly interact with the local environment under user control.

The Internet will also become more closely integrated with other parts of our daily lives, and it will change them accordingly. Power distribution grids, for example, will become a part of the Internet's information universe. We will be able to track and manage electrical power demand and our automobiles will participate in the generation as well as the consumption of electricity. By sharing information through the Internet about energy-consuming and energy-producing devices and systems, we will be able to make them more efficient.

A box of washing machine soap will become part of a service as Internet-enabled washing machines are managed by Web-based services that can configure and activate your washing machine. Scientific measurements and experimental results will be blogged and automatically entered into common data archives to facilitate the distribution, sharing and reproduction of experimental results. One might even imagine that scientific instruments could generate their own data blogs.

These are but a few examples of the way in which the Internet will continue to surround and serve us in the future. The flexibility we have seen in the Internet is a consequence of one simple observation: the Internet is essentially a software artifact. As we have learned in the past several decades, software is an endless frontier. There is no limit to what can be programmed. If we can imagine it, there's a good chance it can be programmed. The Internet of the future will be suffused with software, information, data archives, and populated with devices, appliances, and people who are interacting with and through this rich fabric.

And Google will be there, helping to make sense of it all, helping to organize and make everything accessible and useful.

Monday, May 30, 2011

Checking optical interface type


Sometimes you log onto a switch remotely and need to know what kind of optical interface the switch actually has plugged in. It’s pretty simple:

Switch#show interface gi1/0/25 capabilities
GigabitEthernet1/0/25
Model:                 WS-C3750G-24TS
Type:                  1000BaseLX SFP
Speed:                 1000
Duplex:                full
Trunk encap. type:     802.1Q,ISL
Trunk mode:            on,off,desirable,nonegotiate
Channel:               yes
Broadcast suppression: percentage(0-100)
Flowcontrol:           rx-(off,on,desired),tx-(none)
Fast Start:            yes
QoS scheduling:        rx-(not configurable on per port basis),
tx-(4q3t) (3t: Two configurable values and one fixed.)
CoS rewrite:           yes
ToS rewrite:           yes
UDLD:                  yes
Inline power:          no
SPAN:                  source/destination
PortSecure:            yes
Dot1x:                 yes

The most important thing here is that you can see we have a 1000BaseLX SFP installed.
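
If the installed SFP supports digital optical monitoring, a couple of other commands can help as well. This is only a rough sketch - the availability and exact output of these commands depend on the platform and IOS release:

Switch#show inventory                            <- lists the chassis and installed SFPs with PID and serial number
Switch#show interfaces gi1/0/25 transceiver      <- optical Tx/Rx power, temperature and voltage (DOM-capable SFPs only)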

Sunday, May 29, 2011

Difference Between Router and Layer 3 Switch

If this comes up as a Cisco interview question, the short answer is that a switch, whether Layer 2 or Layer 3, is best suited for the LAN, whereas routers are best suited for the WAN.

Now for a more detailed comparison: Layer 3 routing versus Layer 3 switching.

It is important to understand the difference between Layer 3 routing and Layer 3 switching. Both terms are open to some interpretation; however, the distinction between both can perhaps be best explained by examining how an IP packet is routed. The process of routing an IP packet can be divided into two distinct processes:

Control plane—The control plane process is responsible for building and maintaining the IP routing table, which defines where an IP packet should be routed based upon its destination address, expressed as a next-hop IP address and the egress interface through which that next hop is reachable. Layer 3 routing generally refers to control plane operations.

Data plane—The data plane process is responsible for actually routing an IP packet, based upon information learned by the control plane. Whereas the control plane defines where an IP packet should be routed to, the data plane defines exactly how an IP packet should be routed. This information includes the underlying Layer 2 addressing required for the IP packet so that it reaches the next hop destination, as well as other operations required for IP routing, such as decrementing the time-to-live (TTL) field and recomputing the IP header checksum. Layer 3 switching generally refers to data plane operations.
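
As a rough illustration (a hedged sketch using standard Cisco IOS show commands; the prefix is just an example and the exact output varies by platform), the control plane's result is visible in the routing table, while the data plane's pre-computed forwarding and rewrite information is visible in the CEF tables:

Router#show ip route 10.1.6.0       <- control plane: the RIB entry built by the routing protocols
Router#show ip cef 10.1.6.0         <- data plane: the CEF/FIB entry actually used to switch packets
Router#show adjacency detail        <- data plane: the pre-built Layer 2 rewrite (next-hop MAC) information

On a Layer 3 switch the same split applies, except that the data plane lookups are typically programmed into hardware (TCAM/ASICs) rather than performed by the CPU.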

Saturday, May 28, 2011

Difference Between Static, Aggregate and Generate Routes in JUNOS.

In a nutshell:

- A static route is the most obvious. You need to be able to reach a certain prefix and you specify the next-hop. This is useful when you are not running dynamic routing protocols and/or when you want to override what a dynamic protocol dictates (since the protocol preference for a static route is lower, and therefore more preferred, than that of any dynamic routing protocol).

- An aggregate route is a route you define but which is not used for forwarding traffic (next-hop is discard or reject). It is purely used to advertise this router's connectivity which is why it needs at least one contributing route (a route which belongs to the advertised subnet but with a longer mask - these are the ones used to forward the traffic). Typically, the aggregate route would be advertised into BGP (if it is active thanks to contributing routes) - BGP does not like dealing with routes which are too specific - it prefers aggregates.

- A generated route is technically an aggregate route, but one which can be used to forward traffic. Traffic which matches the generated route (and not more specific routes) will be forwarded using the same next-hop as the first contributing route. A generated route is typically combined with a policy to match which routes we want to be contributing and thus used as next hops. The generated route is typically the default 0/0 with a policy matching upstream routes - i.e., provide connectivity only if certain upstream routes exist (a configuration sketch appears at the end of this post).

Order of Preference of Nexthop in Generate Route.

- route with lowest protocol preference (eg: statics are preferred)
- route with numerically lowest IP address as tie breaker (eg: 10.1.1.0/24 is preferred over 11.1.1.0/30)



Generated routes are very similar to aggregate routes, with one exception: generated routes inherit a real next-hop interface, while aggregate routes only allow discard or reject as next-hops.
For example, the following config:
routing-options {
    aggregate {
        route 192.168.16.0/21;
    }
    generate {
        route 10.0.0.0/16;
    }
}
Results in:
lab@lab> show route protocol aggregate
inet.0: 57 destinations, 60 routes (57 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
10.0.0.0/16 *[Aggregate/130] 00:00:04
                   > via so-0/0/0.0 <<<<<<REAL NEXT HOP (inherited from contributor)
192.168.16.0/21 *[Aggregate/130] 1d 08:33:02
                         Reject <<<<<<Reject is the default (can be configured to discard)
Note that they both show up in the routing table as protocol type Aggregate.
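
To tie this together, here is a minimal sketch of a generated default route whose next hop is inherited from selected contributing routes. The prefix, policy name and upstream routes below are purely hypothetical:

policy-options {
    policy-statement UPSTREAM-CONTRIBUTORS {
        term ISP-ROUTES {
            from {
                route-filter 172.16.0.0/16 orlonger;   /* hypothetical upstream prefixes */
            }
            then accept;
        }
        term REJECT-REST {
            then reject;
        }
    }
}
routing-options {
    generate {
        route 0.0.0.0/0 policy UPSTREAM-CONTRIBUTORS;
    }
}

If no route matching the policy is present, the generated 0/0 has no contributing routes, goes inactive, and the default is effectively withdrawn.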

Tuesday, May 24, 2011

Ping Tool for Troubleshooting

MetaPing is a Visual Ping Monitor that makes it quick and easy to keep an eye on the health of your network.
Features:
- Easy to read status for each host.
- Host grouping for quick problem analysis.
- Uptime for each host.



Its only purpose in life is to ping hosts and graph the RTT in real time. This can be very useful if you are doing testing and need to see if devices go down when you make a change, or if you need a temporary monitoring tool.

You can download the tool here:
Metaping

Friday, May 20, 2011

Free Monitoring tool for SMB - Cisco Network Assistant

For managing smaller networks that may not have the resources to purchase a suite such as Ciscoworks, Cisco has a free tool called Cisco Network Assistant, which can be downloaded here http://www.cisco.com/cisco/software/release.html?mdfid=280771500&softwareid=280775097&release=5.6.2 (CCO Account required).

Unfortunately it doesn't support much of the older (or larger) equipment that you may have deployed, but odds are that if you purchased it within the last couple of years this software can manage it. Additionally, it can only manage 40 devices; however, you can pick and choose which devices you manage with the software.

The list of features feels like reading those of its much more expensive Ciscoworks relative:
Configuration management
Troubleshooting advice
Inventory reports
Event notification
Network security settings
Task-based menu
File management
Drag-and-drop Cisco IOS Software upgrades
Even if you don't want to use all of the features, the interface graphing option can be helpful if you need to quickly see the utilization or errors on an interface. Additionally, the configuration backup and restore can be very helpful. In short, if you have a Cisco network, it's a small and free download that may save you some headaches.

Saturday, May 14, 2011

Troubleshooting Tools - Route Servers and Looking Glasses

One of my favorite Internet troubleshooting websites is http://traceroute.org. It hosts a collection of links to BGP Looking Glasses and Route Servers all over the Internet. This is extremely useful when trying to track down why people in various parts of the Internet cannot reach your public IP space, since it lets you confirm that the routes you are advertising are propagating properly and that someone else is not stepping on your routes.

Most of the providers also give you the ability to ping and traceroute from various points around the world, which is sometimes helpful as well.

Friday, May 13, 2011

Cisco's Spending Spree: 18 Acquisitions

Cisco's CEO John Chambers recently acknowledged what so many in the industry have been saying -- that Cisco (NASDAQ:CSCO) has lost its focus in the technology industry. The networking giant has been spreading itself thin by dipping into markets such as consumer electronics at the expense of its dominance in markets that really matter to business. Certainly Cisco's recent acquisitions over the past few years have not all followed the straight path of providing networking and infrastructure to business. Here's a look at Cisco acquisitions: 18 over the last 33 months.

Tuesday, May 10, 2011

MPLS Label Structure

A label is a short, four-byte, fixed-length, locally significant identifier which is used to identify a Forwarding Equivalence Class (FEC). The label that is put on a particular packet represents the FEC to which that packet is assigned.



Label — Label Value (unstructured), 20 bits. Carries the actual value of the label. Reserved values:
  0/2: IPv4/IPv6 Explicit NULL Label - must be the sole label stack entry (forward based on IPv4/IPv6).
  1: Router Alert Label (requires software processing).
  3: Implicit NULL Label.

Exp — Experimental Use, 3 bits; currently used as a Class of Service (CoS) field (see the marking sketch below).

S — Bottom of Stack, 1 bit.

TTL — Time to Live, 8 bits.
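
Because the Exp bits carry the CoS marking, they are normally set at label imposition with an MQC policy on the ingress PE. The following is only a rough sketch - the class name and interface are hypothetical, and keyword support (for example the "imposition" option) varies by platform and IOS release:

class-map match-any VOICE
 match ip dscp ef
!
policy-map MARK-EXP
 class VOICE
  set mpls experimental imposition 5
!
interface GigabitEthernet0/0
 description Customer-facing interface on the ingress PE (hypothetical)
 service-policy input MARK-EXP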

Network Design - Simple Explanation

 
  • Three Layer Model:
    • Access layer: Where end users are connected; handles intra-VLAN traffic.
    • Distribution Layer: Where access layer switches are aggregated; handles inter-VLAN routing.
    • Core Layer: where distribution layer switches are aggregated. Center to all users.

  • Access layer:
    • Low cost per port
    • High port density
    • Scalable uplinks to higher layers.
    • Resiliency through multiple uplinks.
    • User access functions such as VLAN assignment, traffic and protocol filtering, and QoS (see the sketch after this list).

  • Distribution Layer:
    • High L3 throughput for packet handling.
    • Access list, packet filters and Qos features.
    • Scalable and resilient high-speed links to access and core layers.
    • Acts as the L3 boundary for access VLANs. Broadcasts shouldn’t travel across the distribution layer.

  • Core Layer:
    • Very high L3 throughput.
    • Advanced QOS and L3 protocol functions.
    • Redundancy and resilience for HA.

  • Switch Block:
    • Collection of access layer switches together with their (typically two) distribution switches.
    • Sized based on traffic types and behavior, size and number of workgroups.
    • Need redundancy within switch block.
    • Broadcasts from a PC should be confined within the switch block.

  • Core block:
    • An enterprise/campus network backbone.
    • Collapsed Core: Distribution and core layers are unified; one device performs both layers’ functions.
    • Dual Core: Two core routers; switch blocks are connected to both core routers in a redundant fashion.
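
As a rough illustration of the access/distribution split above (a hedged sketch only - VLAN numbers, addresses and interface names are hypothetical), the access switch handles the Layer 2 port functions while the distribution switch terminates the VLAN at Layer 3:

! Access layer switch - user-facing port
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast
!
! Distribution layer switch - Layer 3 boundary for VLAN 10
ip routing
interface Vlan10
 ip address 10.1.10.1 255.255.255.0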

Monday, May 9, 2011

Multicast Made Easy


Multicast Quick-Start Configuration Guide


http://www.cisco.com/en/US/tech/tk828/technologies_tech_note09186a0080094821.shtml

Configuring IP Multicast Routing
http://www.cisco.com/en/US/docs/ios/12_0/np1/configuration/guide/1cmulti.pdf

Multicast Deployment Made Easy
IP Multicast Planning and Deployment Guide
http://www.cisco.com/warp/public/cc/techno/tity/ipmu/tech/ipcas_dg.pdf


Internet Protocol (IP) Multicast
Technology Overview
http://www.cisco.com/warp/public/cc/pd/iosw/prodlit/ipimt_ov.pdf

Understanding OSPF External Route Path Selection


Courtesy - INE

What is the major difference in using an E1 route over an E2 route in OSPF?

This is actually a very common area of confusion and misunderstanding in OSPF. Part of the problem is that the vast majority of CCNA and CCNP texts teach the theory that for OSPF path selection of E1 vs E2 routes, E1 routes use the redistributed cost plus the cost to the ASBR, while E2 routes use only the redistributed cost. When I just checked the most recent CCNP ROUTE text from Cisco Press, it specifically says that “[w]hen flooded, OSPF has little work to do to calculate the metric for an E2 route, because by definition, the E2 route’s metric is simply the metric listed in the Type 5 LSA. In other words, the OSPF routers do not add any internal OSPF cost to the metric for an E2 route.” While technically true, this statement is an oversimplification. For CCNP level, this might be fine, but for CCIE level it is not.

The key point that I’ll demonstrate in this post is that while it is true that “OSPF routers do not add any internal OSPF cost to the metric for an E2 route”, both the intra-area and inter-area cost is still considered in the OSPF path selection state machine for these routes.


First, let’s review the order of the OSPF path selection process. Regardless of a route’s metric or administrative distance, OSPF will choose routes in the following order:

Intra-Area (O)
Inter-Area (O IA)
External Type 1 (E1)
External Type 2 (E2)
NSSA Type 1 (N1)
NSSA Type 2 (N2)

To demonstrate this, take the following topology:



R1 connects to R2 and R3 via area 0. R2 and R3 connect to R4 and R5 via area 1 respectively. R4 and R5 connect to R6 via another routing domain, which is EIGRP in this case. R6 advertises the prefix 10.1.6.0/24 into EIGRP. R4 and R5 perform mutual redistribution between EIGRP and OSPF with the default parameters, as follows:

R4:
router eigrp 10
 redistribute ospf 1 metric 100000 100 255 1 1500
!
router ospf 1
 redistribute eigrp 10 subnets

R5:
router eigrp 10
 redistribute ospf 1 metric 100000 100 255 1 1500
!
router ospf 1
 redistribute eigrp 10 subnets

The result of this is that R1 learns the prefix 10.1.6.0/24 as an OSPF E2 route via both R2 and R3, with a default cost of 20. This can be seen in the routing table output below. The other OSPF learned routes are the transit links between the routers in question.

R1#sh ip route ospf
     10.0.0.0/24 is subnetted, 8 subnets
O E2    10.1.6.0 [110/20] via 10.1.13.3, 00:09:43, FastEthernet0/0.13
                 [110/20] via 10.1.12.2, 00:09:43, FastEthernet0/0.12
O IA    10.1.24.0 [110/2] via 10.1.12.2, 00:56:44, FastEthernet0/0.12
O E2    10.1.46.0 [110/20] via 10.1.13.3, 00:09:43, FastEthernet0/0.13
                  [110/20] via 10.1.12.2, 00:09:43, FastEthernet0/0.12
O IA    10.1.35.0 [110/2] via 10.1.13.3, 00:56:44, FastEthernet0/0.13
O E2    10.1.56.0 [110/20] via 10.1.13.3, 00:09:43, FastEthernet0/0.13
                  [110/20] via 10.1.12.2, 00:09:43, FastEthernet0/0.12

Note that all the routes redistributed from EIGRP appear on R1 with a default metric of 20. Now let’s examine the details of the route 10.1.6.0/24 on R1.

R1#show ip route 10.1.6.0
Routing entry for 10.1.6.0/24
  Known via "ospf 1", distance 110, metric 20, type extern 2, forward metric 2
  Last update from 10.1.13.3 on FastEthernet0/0.13, 00:12:03 ago
  Routing Descriptor Blocks:
    10.1.13.3, from 10.1.5.5, 00:12:03 ago, via FastEthernet0/0.13
      Route metric is 20, traffic share count is 1
  * 10.1.12.2, from 10.1.4.4, 00:12:03 ago, via FastEthernet0/0.12
      Route metric is 20, traffic share count is 1

As expected, both paths, via R2 and R3, have a metric of 20. However, there is an additional field in the route’s output called the “forward metric”. This field denotes the cost to the ASBR(s). In this case, the ASBRs are R4 and R5 for the routes via R2 and R3 respectively. Since all interfaces are FastEthernet, with a default OSPF cost of 1, the cost to both R4 and R5 is 2, or essentially 2 hops.

The reason that multiple routes are installed in R1’s routing table is that the route type (E2), the metric (20), and the forward metric (2) are all a tie. If any of these fields were to change, the path selection would change.

To demonstrate this, let’s change the route type to E1 under R4’s OSPF process. This can be accomplished as follows:

R4#config t
Enter configuration commands, one per line.  End with CNTL/Z.
R4(config)#router ospf 1
R4(config-router)#redistribute eigrp 10 subnets metric-type 1
R4(config-router)#end
R4#

The result of this change is that R1 now only installs a single route to 10.1.6.0/24, the E1 route learned via R2.

R1#show ip route 10.1.6.0
Routing entry for 10.1.6.0/24
  Known via "ospf 1", distance 110, metric 22, type extern 1
  Last update from 10.1.12.2 on FastEthernet0/0.12, 00:00:35 ago
  Routing Descriptor Blocks:
  * 10.1.12.2, from 10.1.4.4, 00:00:35 ago, via FastEthernet0/0.12
      Route metric is 22, traffic share count is 1

Note that the metric and the forward metric seen in the previous E2 route are now collapsed into the single “metric” field of the E1 route. Although the value is technically the same (a cost of 2 to the ASBR plus the cost of 20 that the ASBR reports), the E1 route is preferred over the E2 route due to the OSPF path selection state machine preference. Even if we were to raise the metric of the E1 route so that its cost is higher than the E2 route’s, the E1 route would still be preferred:

R4#config t
Enter configuration commands, one per line.  End with CNTL/Z.
R4(config)#router ospf 1
R4(config-router)#redistribute eigrp 10 subnets metric-type 1 metric 100
R4(config-router)#end
R4#

R1 still installs the E1 route, even though the E1 metric of 102 is higher than the E2 metric of 20 plus a forward metric of 2.

R1#show ip route 10.1.6.0
Routing entry for 10.1.6.0/24
  Known via "ospf 1", distance 110, metric 102, type extern 1
  Last update from 10.1.12.2 on FastEthernet0/0.12, 00:00:15 ago
  Routing Descriptor Blocks:
  * 10.1.12.2, from 10.1.4.4, 00:00:15 ago, via FastEthernet0/0.12
      Route metric is 102, traffic share count is 1

R1 still knows about both the E1 and the E2 route in the Link-State Database, but the E1 route must always be preferred:

R1#show ip ospf database external 10.1.6.0

            OSPF Router with ID (10.1.1.1) (Process ID 1)

                Type-5 AS External Link States

  Routing Bit Set on this LSA
  LS age: 64
  Options: (No TOS-capability, DC)
  LS Type: AS External Link
  Link State ID: 10.1.6.0 (External Network Number )
  Advertising Router: 10.1.4.4
  LS Seq Number: 80000003
  Checksum: 0x1C8E
  Length: 36
  Network Mask: /24
        Metric Type: 1 (Comparable directly to link state metric)
        TOS: 0
        Metric: 100
        Forward Address: 0.0.0.0
        External Route Tag: 0

  LS age: 1388
  Options: (No TOS-capability, DC)
  LS Type: AS External Link
  Link State ID: 10.1.6.0 (External Network Number )
  Advertising Router: 10.1.5.5
  LS Seq Number: 80000001
  Checksum: 0x7307
  Length: 36
  Network Mask: /24
        Metric Type: 2 (Larger than any link state path)
        TOS: 0
        Metric: 20
        Forward Address: 0.0.0.0
        External Route Tag: 0

This is the behavior we would expect, because E1 routes must always be preferred over E2 routes. Now let’s look at some of the commonly misunderstood cases, where the E2 routes use both the metric and the forward metric for their path selection.

First, R4’s redistribution is modified to return the metric-type to E2, but to use a higher metric of 100 than the default of 20:

R4#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R4(config)#router ospf 1
R4(config-router)#redistribute eigrp 10 subnets metric-type 2 metric 100
R4(config-router)#end
R4#

The result on R1 is that the route via R4 is less preferred, since it now has a metric of 100 (and still a forward metric of 2) vs the metric of 20 (and the forward metric of 2) via R5.

R1#show ip route 10.1.6.0
Routing entry for 10.1.6.0/24
  Known via "ospf 1", distance 110, metric 20, type extern 2, forward metric 2
  Last update from 10.1.13.3 on FastEthernet0/0.13, 00:00:30 ago
  Routing Descriptor Blocks:
  * 10.1.13.3, from 10.1.5.5, 00:00:30 ago, via FastEthernet0/0.13
      Route metric is 20, traffic share count is 1

The alternate route via R4 can still be seen in the database.

R1#show ip ospf database external 10.1.6.0

            OSPF Router with ID (10.1.1.1) (Process ID 1)

                Type-5 AS External Link States

  Routing Bit Set on this LSA
  LS age: 34
  Options: (No TOS-capability, DC)
  LS Type: AS External Link
  Link State ID: 10.1.6.0 (External Network Number )
  Advertising Router: 10.1.4.4
  LS Seq Number: 80000004
  Checksum: 0x9D8B
  Length: 36
  Network Mask: /24
        Metric Type: 2 (Larger than any link state path)
        TOS: 0
        Metric: 100
        Forward Address: 0.0.0.0
        External Route Tag: 0

  Routing Bit Set on this LSA
  LS age: 1653
  Options: (No TOS-capability, DC)
  LS Type: AS External Link
  Link State ID: 10.1.6.0 (External Network Number )
  Advertising Router: 10.1.5.5
  LS Seq Number: 80000001
  Checksum: 0x7307
  Length: 36
  Network Mask: /24
        Metric Type: 2 (Larger than any link state path)
        TOS: 0
        Metric: 20
        Forward Address: 0.0.0.0
        External Route Tag: 0

This is the path selection that we would ideally want, because the total cost of the path via R4 is 102 (metric of 100 plus a forward metric of 2), while the cost of the path via R5 is 22 (metric of 20 plus a forward metric of 2). The result of this path selection would be the same if we were to change both routes to E1, as seen below.

R4#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R4(config)#router ospf 1
R4(config-router)#redistribute eigrp 10 subnets metric-type 1 metric 100
R4(config-router)#end
R4#

R5#config t
Enter configuration commands, one per line.  End with CNTL/Z.
R5(config)#router ospf 1
R5(config-router)#redistribute eigrp 10 subnets metric-type 1
R5(config-router)#end
R5#

R1 still chooses the route via R5, since this has a cost of 22 vs R4’s cost of 102.

R1#show ip route 10.1.6.0
Routing entry for 10.1.6.0/24
  Known via "ospf 1", distance 110, metric 22, type extern 1
  Last update from 10.1.13.3 on FastEthernet0/0.13, 00:00:41 ago
  Routing Descriptor Blocks:
  * 10.1.13.3, from 10.1.5.5, 00:00:41 ago, via FastEthernet0/0.13
      Route metric is 22, traffic share count is 1

R1#show ip ospf database external 10.1.6.0

            OSPF Router with ID (10.1.1.1) (Process ID 1)

                Type-5 AS External Link States

  Routing Bit Set on this LSA
  LS age: 56
  Options: (No TOS-capability, DC)
  LS Type: AS External Link
  Link State ID: 10.1.6.0 (External Network Number )
  Advertising Router: 10.1.4.4
  LS Seq Number: 80000005
  Checksum: 0x1890
  Length: 36
  Network Mask: /24
        Metric Type: 1 (Comparable directly to link state metric)
        TOS: 0
        Metric: 100
        Forward Address: 0.0.0.0
        External Route Tag: 0

  Routing Bit Set on this LSA
  LS age: 45
  Options: (No TOS-capability, DC)
  LS Type: AS External Link
  Link State ID: 10.1.6.0 (External Network Number )
  Advertising Router: 10.1.5.5
  LS Seq Number: 80000003
  Checksum: 0xEB0D
  Length: 36
  Network Mask: /24
        Metric Type: 1 (Comparable directly to link state metric)
        TOS: 0
        Metric: 20
        Forward Address: 0.0.0.0
        External Route Tag: 0

R1#

Note that the E1 route itself in the database does not include the cost to the ASBR. This must be calculated separately either based on the Type-1 LSA or Type-4 LSA, depending on whether the route to the ASBR is Intra-Area or Inter-Area respectively.

So now this raises the question: why does it matter if we use E1 vs E2? Of course, as we saw, E1 is always preferred over E2 due to the OSPF path selection order, but what is the difference between having *all* E1 routes vs having *all* E2 routes? Now let’s look at a case where it *does* matter whether you’re using E1 vs E2.
R1’s OSPF cost on the link to R2 is increased as follows:

R1#config t
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#interface Fa0/0.12
R1(config-subif)#ip ospf cost 100
R1(config-subif)#end
R1#

R4 and R5’s redistribution is modified as follows:

R4#config t
Enter configuration commands, one per line.  End with CNTL/Z.
R4(config)#router ospf 1
R4(config-router)#redistribute eigrp 10 subnets metric-type 1 metric 99
R4(config-router)#end
R4#

R5#config t
Enter configuration commands, one per line.  End with CNTL/Z.
R5(config)#router ospf 1
R5(config-router)#redistribute eigrp 10 subnets metric-type 1 metric 198
R5(config-router)#end
R5#

Now R1’s routes to the prefix 10.1.6.0/24 are as follows: Path 1 via the link to R2 with a cost of 100, plus the link to R4 with a cost of 1, plus the redistributed metric of 99, making this total path a cost of 200. Next, Path 2 is available via the link to R3 with a cost of 1, plus the link to R5 with a cost of 1, plus the redistributed metric of 198, making this total path a cost of 200 as well. The result is that R1 installs both paths equally:

R1#show ip route 10.1.6.0
Routing entry for 10.1.6.0/24
  Known via "ospf 1", distance 110, metric 200, type extern 1
  Last update from 10.1.12.2 on FastEthernet0/0.12, 00:02:54 ago
  Routing Descriptor Blocks:
  * 10.1.13.3, from 10.1.5.5, 00:02:54 ago, via FastEthernet0/0.13
      Route metric is 200, traffic share count is 1
    10.1.12.2, from 10.1.4.4, 00:02:54 ago, via FastEthernet0/0.12
      Route metric is 200, traffic share count is 1

Note that the database lists the costs of the Type-5 External LSAs as different though:

R1#show ip ospf database external 10.1.6.0

            OSPF Router with ID (10.1.1.1) (Process ID 1)

                Type-5 AS External Link States

  Routing Bit Set on this LSA
  LS age: 291
  Options: (No TOS-capability, DC)
  LS Type: AS External Link
  Link State ID: 10.1.6.0 (External Network Number )
  Advertising Router: 10.1.4.4
  LS Seq Number: 80000006
  Checksum: 0xC9C
  Length: 36
  Network Mask: /24
        Metric Type: 1 (Comparable directly to link state metric)
        TOS: 0
        Metric: 99
        Forward Address: 0.0.0.0
        External Route Tag: 0

  Routing Bit Set on this LSA
  LS age: 207
  Options: (No TOS-capability, DC)
  LS Type: AS External Link
  Link State ID: 10.1.6.0 (External Network Number )
  Advertising Router: 10.1.5.5
  LS Seq Number: 80000004
  Checksum: 0xE460
  Length: 36
  Network Mask: /24
        Metric Type: 1 (Comparable directly to link state metric)
        TOS: 0
        Metric: 198
        Forward Address: 0.0.0.0
        External Route Tag: 0

What happens if we were to change the metric-type to 2 on both R4 and R5 now? Let’s see:

R4(config)#router ospf 1
R4(config-router)#redistribute eigrp 10 subnets metric-type 2 metric 99
R4(config-router)#end
R4#

R5#config t
Enter configuration commands, one per line.  End with CNTL/Z.
R5(config)#router ospf 1
R5(config-router)#redistribute eigrp 10 subnets metric-type 2 metric 198
R5(config-router)#end
R5#

Even though the end-to-end costs are still the same, R1 should now prefer the path with the lower redistributed metric via R4:

R1#show ip route 10.1.6.0
Routing entry for 10.1.6.0/24
  Known via "ospf 1", distance 110, metric 99, type extern 2, forward metric 101
  Last update from 10.1.12.2 on FastEthernet0/0.12, 00:01:09 ago
  Routing Descriptor Blocks:
  * 10.1.12.2, from 10.1.4.4, 00:01:09 ago, via FastEthernet0/0.12
      Route metric is 99, traffic share count is 1

The forward metric of this route means that the total cost is still 200 (the metric of 99 plus the forward metric of 101). In this case, even though both paths are technically equal, only the path with the lower redistribution metric is installed. Now let’s see what happens if we do set the redistribution metric the same.

R4#config t
Enter configuration commands, one per line.  End with CNTL/Z.
R4(config)#router ospf 1
R4(config-router)#redistribute eigrp 10 subnets metric-type 2 metric 1
R4(config-router)#end
R4#

R5#config t
Enter configuration commands, one per line.  End with CNTL/Z.
R5(config)#router ospf 1
R5(config-router)#redistribute eigrp 10 subnets metric-type 2 metric 1
R5(config-router)#end
R5#

Both routes now have the same metric of 1, so both should be installed in R1’s routing table, right? Let’s check:

R1#show ip route 10.1.6.0
Routing entry for 10.1.6.0/24
  Known via "ospf 1", distance 110, metric 1, type extern 2, forward metric 2
  Last update from 10.1.13.3 on FastEthernet0/0.13, 00:00:42 ago
  Routing Descriptor Blocks:
  * 10.1.13.3, from 10.1.5.5, 00:00:42 ago, via FastEthernet0/0.13
      Route metric is 1, traffic share count is 1

This may not be the result we expect. Only the path via R5 is installed, not the path via R4. Let’s look at the database and see why:

R1#show ip ospf database external 10.1.6.0

            OSPF Router with ID (10.1.1.1) (Process ID 1)

                Type-5 AS External Link States

  Routing Bit Set on this LSA
  LS age: 56
  Options: (No TOS-capability, DC)
  LS Type: AS External Link
  Link State ID: 10.1.6.0 (External Network Number )
  Advertising Router: 10.1.4.4
  LS Seq Number: 80000008
  Checksum: 0xB3D4
  Length: 36
  Network Mask: /24
        Metric Type: 2 (Larger than any link state path)
        TOS: 0
        Metric: 1
        Forward Address: 0.0.0.0
        External Route Tag: 0

  Routing Bit Set on this LSA
  LS age: 47
  Options: (No TOS-capability, DC)
  LS Type: AS External Link
  Link State ID: 10.1.6.0 (External Network Number )
  Advertising Router: 10.1.5.5
  LS Seq Number: 80000006
  Checksum: 0xAADD
  Length: 36
  Network Mask: /24
        Metric Type: 2 (Larger than any link state path)
        TOS: 0
        Metric: 1
        Forward Address: 0.0.0.0
        External Route Tag: 0

Both of these routes show the same cost, as denoted by the “Metric: 1”, so why is one being chosen over the other? The reason is that in reality, OSPF External Type-2 (E2) routes *do* take the cost to the ASBR into account during route calculation. The problem though is that by looking at just the External LSA’s information, we can’t see why we’re choosing one over the other.

Now let’s go through the entire recursion process in the database to figure out why R1 is choosing the path via R5 over the path to R4.

First, as we saw above, R1 finds both routes to the prefix with a metric of 1. Since this is a tie, the next thing R1 does is determine if the route to the ASBR is via an Intra-Area path. This is done by looking up the Type-1 Router LSA for the Advertising Router field found in the Type-5 External LSA.

R1#show ip ospf database router 10.1.4.4

            OSPF Router with ID (10.1.1.1) (Process ID 1)
R1#show ip ospf database router 10.1.5.5

            OSPF Router with ID (10.1.1.1) (Process ID 1)
R1#

This output on R1 means that it does not have an Intra-Area path to either of the ASBRs advertising these routes. The next step is to check if there is an Inter-Area path. This is done by examining the Type-4 ASBR Summary LSA.

R1#show ip ospf database asbr-summary 10.1.4.4

            OSPF Router with ID (10.1.1.1) (Process ID 1)

                Summary ASB Link States (Area 0)

  Routing Bit Set on this LSA
  LS age: 1889
  Options: (No TOS-capability, DC, Upward)
  LS Type: Summary Links(AS Boundary Router)
  Link State ID: 10.1.4.4 (AS Boundary Router address)
  Advertising Router: 10.1.2.2
  LS Seq Number: 80000002
  Checksum: 0x24F3
  Length: 28
  Network Mask: /0
        TOS: 0  Metric: 1 

R1#show ip ospf database asbr-summary 10.1.5.5

            OSPF Router with ID (10.1.1.1) (Process ID 1)

                Summary ASB Link States (Area 0)

  Routing Bit Set on this LSA
  LS age: 1871
  Options: (No TOS-capability, DC, Upward)
  LS Type: Summary Links(AS Boundary Router)
  Link State ID: 10.1.5.5 (AS Boundary Router address)
  Advertising Router: 10.1.3.3
  LS Seq Number: 80000002
  Checksum: 0x212
  Length: 28
  Network Mask: /0
        TOS: 0  Metric: 1

This output indicates that R1 does have Inter-Area routes to the ASBRs R4 and R5. The Inter-Area metric to reach them is 1 via ABRs R2 (10.1.2.2) and R3 (10.1.3.3) respectively. Now R1 needs to know which ABR is closer, R2 or R3? This is accomplished by looking up the Type-1 Router LSA to the ABRs that are originating the Type-4 ASBR Summary LSAs.

R1#show ip ospf database router 10.1.2.2

            OSPF Router with ID (10.1.1.1) (Process ID 1)

                Router Link States (Area 0)

  Routing Bit Set on this LSA
  LS age: 724
  Options: (No TOS-capability, DC)
  LS Type: Router Links
  Link State ID: 10.1.2.2
  Advertising Router: 10.1.2.2
  LS Seq Number: 8000000D
  Checksum: 0xA332
  Length: 36
  Area Border Router
  Number of Links: 1

    Link connected to: a Transit Network
     (Link ID) Designated Router address: 10.1.12.2
     (Link Data) Router Interface address: 10.1.12.2
      Number of TOS metrics: 0
       TOS 0 Metrics: 1

R1#show ip ospf database router 10.1.3.3

            OSPF Router with ID (10.1.1.1) (Process ID 1)

                Router Link States (Area 0)

  Routing Bit Set on this LSA
  LS age: 1217
  Options: (No TOS-capability, DC)
  LS Type: Router Links
  Link State ID: 10.1.3.3
  Advertising Router: 10.1.3.3
  LS Seq Number: 80000010
  Checksum: 0x9537
  Length: 36
  Area Border Router
  Number of Links: 1

    Link connected to: a Transit Network
     (Link ID) Designated Router address: 10.1.13.1
     (Link Data) Router Interface address: 10.1.13.3
      Number of TOS metrics: 0
       TOS 0 Metrics: 1

This output indicates that R2 and R3 are adjacent with the Designated Routers 10.1.12.2 and 10.1.13.3 respectively. Since R1 is also adjacent with these DRs, the cost from R1 to the DR is now added to the path.

R1#show ip ospf database router 10.1.1.1

OSPF Router with ID (10.1.1.1) (Process ID 1)

Router Link States (Area 0)

LS age: 948
Options: (No TOS-capability, DC)
LS Type: Router Links
Link State ID: 10.1.1.1
Advertising Router: 10.1.1.1
LS Seq Number: 8000000F
Checksum: 0x6FA6
Length: 60
Number of Links: 3

Link connected to: a Stub Network
(Link ID) Network/subnet number: 10.1.1.1
(Link Data) Network Mask: 255.255.255.255
Number of TOS metrics: 0
TOS 0 Metrics: 1

Link connected to: a Transit Network
(Link ID) Designated Router address: 10.1.13.1
(Link Data) Router Interface address: 10.1.13.1
Number of TOS metrics: 0
TOS 0 Metrics: 1

Link connected to: a Transit Network
(Link ID) Designated Router address: 10.1.12.2
(Link Data) Router Interface address: 10.1.12.1
Number of TOS metrics: 0
TOS 0 Metrics: 100

R1 now knows that its cost to the DR 10.1.12.2 is 100, which is adjacent with R2, whose cost to R4 is 1, whose redistributed metric is 1. R1 also now knows that its cost to the DR 10.1.13.3 is 1, which is adjacent with R3, whose cost to R5 is 1, whose redistributed metric is 1. This means that the total cost to go to 10.1.6.0 via the R1 -> R2 -> R4 path is 102, while the total cost to go to 10.1.6.0 via the R1 -> R3 -> R5 path is 3.

The final result of this is that R1 chooses the shorter path to the ASBR, which is the R1 -> R3 -> R5 path. Although the other route to the prefix is via an E2 route with the same external cost, one is preferred over another due to the shorter ASBR path.

Based on this we can see that both E1 and E2 routes take both the redistributed cost and the cost to the ASBR into account when making their path selection. The key difference is that E1 is always preferred over E2, followed by the E2 route with the lower redistribution metric. If multiple E2 routes exist with the same redistribution metric, the path with the lower forward metric (metric to the ASBR) is preferred. If there are multiple E2 routes with both the same redistribution metric and forward metric, they can both be installed in the routing table. Why does OSPF do this though? Originally this stems from the design concepts of “hot potato” and “cold potato” routing.

Think of a routing domain learning external routes. Typically those prefixes have some “external” metric associated with them – for example, the E2 external metric or the BGP MED attribute value. If the routers in the local domain select the exit point based on the external metric, they are said to perform “cold potato” routing. This means that the exit point is selected based on the external metric preference, e.g. distances to the prefix in the bordering routing system. This optimizes link utilization in the external system but may lead to suboptimal path selection in the local domain. Conversely, “hot potato” routing is the model where the exit point selection is performed based on the local metric to the exit point associated with the prefix. In other words, the “hot potato” model tries to push packets out of the local system as quickly as possible, optimizing internal link utilization.

Now within the scope of OSPF, think of the E2 route selection process: OSPF chooses the best exit point based on the external metric and uses the internal cost to ASBR as a tie breaker. In other words, OSPF performs “cold potato” routing with respect to E2 prefixes. It is easy to turn this process into “hot potato” by ensuring that every exit point uses the same E2 metric value. It is also possible to perform other sorts of traffic engineering by selectively manipulating the external metric associated with the E2 route, allowing for full flexibility of exit point selection.

Finally, we approach E1. This type of routing is a hybrid of hot and cold routing models – external metrics are directly added to the internal metrics. This implicitly assumes that external metrics are “comparable” to the internal metrics. In turn, this means E1 is meant to be used with another OSPF domain that uses a similar metric system. This is commonly found in split/merge scenarios where you have multiple routing processes within the same autonomous system, and want to achieve optimum path selection accounting for both metrics in both systems. This is similar to the way EIGRP performs metric computation for external prefixes.

So there we have it. While it is technically true that “OSPF routers do not add any internal OSPF cost to the metric for an E2 route”, both the intra-area and inter-area cost can still be considered in the OSPF path selection regardless of whether the route is E1 or E2.

Sunday, May 8, 2011

CCIE Notes - Multicast


Multicast notes:
  • class D - 224.0.0.0/4 (224.0.0.0 – 239.255.255.255)
  • reserved (like rfc 1918) – 224.0.0.0/24 (224.0.0.0 -224.0.0.255)
  • Administratively Scoped Block  – 239.0.0.0-239.255.255.255
  1. hellos are sent every 30 sec to 224.0.0.13
  2. loopback – ip ospf point-to-point for RPF check
  3. (S,G) source tree / shortest path tree / source
    (*,G) shared tree / any source
  4. holdtime = 3.5x the hello
  5. highest IP wins DR
  6. lowest IP wins designated querier
  7. rp-address (unicast) must be advertised in unicast IGP
  8. mtrace to group address to see the reverse path
  9. traffic is always sent to the group address, never from.
  10. the source ip is always a unicast ip address, never a mcast address.
  11. igmp = router to client (automatically enabled with PIM)
  12. pim = router to router (relies on the unicast routing domain, so make sure you have full igp connectivity)
  13. sparse-mode – explicit join (no traffic unless you request it); needs an RP.
  14. dense-mode – implicit join (gets all traffic unless you don’t want it), flood and prune
  15. enable mcast = ip multicast-routing (distributed on 3560)
  16. (*,G) don’t care about the source.  (S,G) knows the source
    • incoming null / outgoing null – does not know the source.
  17. Enable PIM on the shortest path to the RP or you will get RPF failures.
  18. switching from a shared tree (*,G) to a  shortest path tree (S,G) = SPT switchover
  19. if the RPF check fails, the packet is dropped.
    • ping
    • sh ip mroute count
    • debug ip packet
    • add a static ip mroute toward the RPF-failing interface.
  20. RP
      • auto -rp
        • ip pim sparse-dense-mode
        • ip pim send-rp-announce Loopback0 scope 16
          ip pim send-rp-discovery Loopback0 scope 16
        • ip pim autorp listener – use when you have sparse-mode interfaces / an all-sparse-mode router.
        • fallback to dense mode is the default; to prevent it, use: no ip pim dm-fallback.
        • 224.0.1.39 (announce) and 224.0.1.40(discovery)
        • Candidate RPs advertise their willingness to be an RP via “RP-announcement” messages. These messages are periodically sent to a reserved well-known group 224.0.1.39 (CISCO-RP-ANNOUNCE).
        • RP mapping agents join group 224.0.1.39 and map the RPs to the associated groups. The RP mapping agents advertise the authoritative RP-mappings to another well-known group address 224.0.1.40 (CISCO-RP-DISCOVERY). All PIM routers join 224.0.1.40 and store the RP-mappings in their private cache.
        • deny statements – the group will be negatively cached and run in dense mode.
        • control updates with ip multicast boundary
        • For the Auto-RP with Multiple RPs scenario, no load balancing is provided, and, when an RP changes, convergence is normally on the order of 3 minutes.
      • Bootstrap router
        • ip pim sparse-mode
          ip pim bsr-candidate
          ip pim rp-candidate
        • use hash to load balance
        • multiple overlapping RPs = highest priority wins
        • control updates with ip pim bsr-border
    • static – ip pim rp-address – you need this on all the mcast devices (see the minimal sketch after these notes).
      • override – will override AUTORP or BSR rp mappings.
    • dynamic – auto-rp (cisco proprietary) or BSR
  21. ip pim nbma-mode = use on the hub in a Frame Relay network, to bypass split-horizon behavior.
  22. GRE is the duct tape of routing!!!  make your tunnel interfaces passive.
  23. troubleshooting mcast:
    • 1) int s0/0: no ip mroute-cache  2) debug ip mpacket
    • debug ip pim
    • sh ip pim nei | rp | rp mapping | interface
    • sh ip pim int f0/0 detail
    • debug ip pim auto-rp <- shows you which RP is filtered.
    • sh run | in ip pim|int
    • keyword search under ip pim command reference
  24. ip helper-map
    • convert from mcast group to broadcast
      • ip multicast helper-map 224.1.1.1 150.100.200.255 111
      • access-list 111 permit udp host 150.100.255.1 any eq 39000
  25. anycast – 2 RPs with the same IP address
    • r1
      int lo0
      ip add 10.0.0.1 255.255.255.255
      int lo1
      ip add 10.1.1.1 255.255.255.255
      ip msdp peer 10.1.1.2 connect-source lo1
      ip msdp originator-id lo1
      ip pim rp-address 10.0.0.1 [acl]
    • r2
      int lo0
      ip add 10.0.0.1 255.255.255.255
      int lo1
      ip add 10.1.1.2 255.255.255.255
      ip msdp peer 10.1.1.1 connect-source lo1
      ip msdp originator-id lo1
      ip pim rp-address 10.0.0.1 [acl]
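
For reference, a minimal sparse-mode setup with a static RP (a rough sketch only - addresses and interface names are hypothetical) looks like this on every multicast router, as mentioned in point 20 above:

ip multicast-routing
!
interface FastEthernet0/0
 ip pim sparse-mode
!
ip pim rp-address 10.0.0.1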

Thursday, May 5, 2011

25 Facts Partners Should Know About Juniper Networks

1> The company was founded by a former Xerox principal scientist, Pradeep Sindhu, who came up with the idea for Juniper after a sabbatical working at Xerox's Palo Alto Research Center.

2> Sindhu started his company with $200,000 in venture capital money and a team composed of veterans from Sun, MCI and StrataCom.

3> Juniper went public in 1999 with a splash: its first day valuation nearly tripled from start to finish.

4> Last year Juniper made $4.093 billion in revenue.

5> Back in 2009, Juniper announced that it would cut executive pay to fuel a 15 percent increase in R&D.

6> Through the first quarter of 2011, Juniper's stock has risen 23 percent in the last year and net revenue increased by 21 percent.

7> Juniper employs more than 8,700 employees worldwide.

8> The networking company has offices in 53 countries that service more than 30,000 customers and partners worldwide.

9> Juniper's customer base includes 130 global service providers and 96 of the Global Fortune 100.

10> Since April 2010, Juniper has acquired six companies--an impressive feat for an organically grown company that had only 11 acquisitions to its name prior to then.

11> Juniper's 2010 spending spree included laying out close to $100 million for SMobile, $152 million for Trapeze Networks and $95 million for Altor Networks.

12> Most recently the company purchased Brilliant Telecommunications, a manufacturer of packet-based, network synchronization equipment and monitoring solutions, for approximately $4.5 million in February.

13> Before its recent spending spree, the last acquisition the company made was way back in 2005 – a five-year gap.

14> The company's lack of M&A activity in that interim could likely be partially attributed to a four-year-long investigation into stock option backdating that ended only in February 2010--the company paid $169 million to settle a class action suit.

15> Juniper's long-time rivalry with Cisco for supremacy in the Internet core router market has been dubbed by some as the "Core Wars."

16> Today Juniper is taking yet another shot across the bow of Cisco (and HP) with its "Switch to Juniper" campaign, which is offering partners aggressive margins and discounts for its switches and switch bundles.

17> Earlier this year, Juniper stole away channel veteran Luanne Tierney from Cisco, where she worked for 15 years.

18> Tierney will take over as vice president of global partner marketing at Juniper.

19> Most recently, Juniper also poached Nawaf Bitar from Cisco. A security expert picked up in Cisco's 2007 IronPort acquisition, Bitar starts at Juniper as senior vice president and general manager of emerging technologies.

20> One of Tierney's first public moves came at the Juniper partner conference this month, with the announcement of a new co-marketing plan for its channel partners.

21> This comes on the heels of an announcement in January that the company would launch a new continuing education program for partners to further refine its Juniper Learning Academy.

22> Juniper prides itself on its clean channel dealings, touting the fact that over 96 percent of its North American enterprise business goes through distribution.

23> Last year Juniper's top partners reported double-digit growth with its networking and security product base.

24> At its partner conference in April, Juniper said it increased its market development fund spending by 20 percent in the past year.

25> Additionally, 40 percent of all MDF spending is set aside for marketing in 2011.

Juniper Networks Publishes "Day One" How-To Library to Help Simplify Network Configuration and Operation

Juniper Networks (NYSE: JNPR) announced today that it is offering a library of Day One how-to books that provide easy to understand explanations with step-by-step instructions for the configuration and management of a range of Juniper Networks products. Designed to help customers unlock the simplicity, security and automation benefits of the new network, the guides cover a variety of topics and are available in several different formats.

Currently available topics in the Day One library include the fundamentals of the Junos® operating system, including Junos automation, as well as the basics of fabric and switching technologies, services gateways and advanced networking technologies. The guides provide a useful reference tool for the configuration of the Juniper Networks® Universal WAN solution, which includes the MX Series 3D Universal Edge Router and the SRX Series Services Gateway, enabling complete branch connectivity in less than 10 minutes. The Day One books also include guides on such topics as configuring MPLS, MBGP multicast VPNs and IPv6, as well as migrating to OSPF for Juniper Networks routers. Additional titles will be added on an ongoing basis.

"Juniper's Day One how-to guides are designed to provide the information that network managers need to get started on the first day that they work with a new Juniper product or solution," said Mike Marcellin, vice president of marketing and business strategy, Platform Systems Group at Juniper Networks. "The books provide step-by-step instructions and tips in a straight-forward manner from the hands-on experience of our expert authors, so that it's easy to start and finish a networking activity in one day. With more than 100,000 downloads of the guides so far, we believe our customers are finding that they provide a time and cost-saving service in a convenient format."

The Day One book series is available for free download in PDF format at www.juniper.net/dayone. Select titles also feature a copy and paste editor for direct placement of Junos configurations. Titles are available in eBook format for iPads and iPhones from the Apple iBookstore. The titles are also available for download to Kindles, Androids, BlackBerry devices, Macs and PCs by visiting the Kindle Store. In addition, print copies are available for sale from Amazon or Vervante.

Wednesday, May 4, 2011

Cisco Virtual Private LAN Service (VPLS): Interview Questions - Part 1

Q. What is VPLS?

A. VPLS stands for Virtual Private LAN Service, and is a VPN technology that enables Ethernet multipoint services (EMSs) over a packet-switched network infrastructure. VPN users get an emulated LAN segment that offers a Layer 2 broadcast domain. The end user perceives the service as a virtual private Ethernet switch that forwards frames to their respective destinations within the VPN. Ethernet is the technology of choice for LANs due to its relatively low cost and simplicity. Ethernet has also gained recent popularity as a metropolitan-area network (MAN or metro) technology.

VPLS helps extend the reach of Ethernet further, to be used as a WAN technology. Other technologies also enable Ethernet across the WAN: Ethernet over Multiprotocol Label Switching (MPLS), Ethernet over SONET/SDH, Ethernet bridging over ATM, and ATM LAN emulation (LANE). However, they only provide point-to-point connectivity, and their mass deployment is limited by high levels of complexity or by the need for dedicated network architectures that do not facilitate network convergence. Figure 1 shows the logical view of a VPLS connecting three sites. Each customer edge device requires a single connection to the network to get full connectivity to the remaining sites.

Figure 1. Logical View of a VPLS

 Q. What does it mean that VPLS enables an EMS?
A. A multipoint technology allows a user to reach multiple destinations through a single physical or logical connection. This requires the network to make a forwarding decision based on the destination of the packet. Within the context of VPLS, this means that the network makes a forwarding decision based on the destination MAC address of the Ethernet frame. A multipoint service is attractive because fewer connections are required to achieve full connectivity between multiple points. An equivalent level of connectivity based on a point-to-point technology requires a much larger number of connections or the use of suboptimal packet forwarding.

Q. What are the main components of VPLS?

A. In its simplest form, a VPLS consists of several sites connected to provider edge devices implementing the emulated LAN service. These provider edge devices make the forwarding decisions between sites and encapsulate the Ethernet frames across a packet-switched network using a virtual circuit or pseudowire. A virtual switching instance (VSI) is used at each provider edge to implement the forwarding decisions of each VPLS. The provider edges use a full mesh of Ethernet emulated circuits (or pseudowires) to forward the Ethernet frames between provider edges.

Figure 2 illustrates the components of a VPLS that connects three sites.

Figure 2. VPLS Components

Q. How are packets forwarded in VPLS?

A. Ethernet frames are switched between provider edge devices using the VSI forwarding information. Provider edge devices acquire this information using the standard MAC address learning and aging functions used in Ethernet switching. The VSI forwarding information is updated with the MAC addresses learned from physical ports and other provider edge devices via virtual circuits. These functions imply that all broadcast, multicast, and destination-unknown MAC addresses are flooded over all ports and virtual circuits associated with a VSI. Provider edge devices use split-horizon forwarding on the virtual circuits to form a loop-free topology. In this way, the full mesh of virtual circuits provides direct connectivity between the provider edge devices in a VPLS, and no protocols have to be used to generate a loop-free topology (Spanning Tree Protocol, for example).

Q. What are the signaling requirements of VPLS?

A. Two functional components in VPLS involve signaling—provider edge discovery and virtual circuit setup. Cisco® VPLS currently relies on static configuration of provider edge associations within a VPLS. However, the architecture can be easily enhanced to support several discovery protocols, including Border Gateway Protocol (BGP), RADIUS, Label Distribution Protocol (LDP), or Domain Name System (DNS). The virtual circuit setup uses the same LDP signaling mechanism defined for point-to-point services. Using a directed LDP session, each provider edge advertises a virtual circuit label mapping that is used as part of the label stack imposed on the Ethernet frames by the ingress provider edge during packet forwarding.
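As a rough sketch of this signaling, the snippet below (with made-up label values) shows each PE advertising a VC label to every other PE in the VPLS over a targeted LDP session, so the ingress PE knows which inner label to impose toward a given egress PE:

```python
# Sketch of VC-label advertisement between PEs of one VPLS.
# Label values and the allocation scheme are illustrative only.
from itertools import permutations

provider_edges = ["PE1", "PE2", "PE3"]
vpls_id = 100

# Hypothetical label allocation: advertised_labels[(advertiser, receiver)] = label
base = 16000
advertised_labels = {
    (adv, rcv): base + i
    for i, (adv, rcv) in enumerate(permutations(provider_edges, 2))
}

# The ingress PE (PE1) forwarding a frame toward PE3 uses the label PE3
# advertised to it as the inner (VC) label of the stack.
vc_label = advertised_labels[("PE3", "PE1")]
print(f"VPLS {vpls_id}: PE1 -> PE3 uses VC label {vc_label}")
```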

Q. How is reachability information distributed in a VPLS?

A. Cisco VPLS does not require the exchange of reachability (MAC addresses) information via a signaling protocol. This information is learned from the data plane using standard address learning, aging, and filtering mechanisms defined for Ethernet bridging. However, the LDP signaling used for setting up and tearing down the virtual circuits can be used to indicate to a remote provider edge that some or all MAC addresses learned over a virtual circuit need to be withdrawn from the VSI. This mechanism provides a convergence optimization over the normal address aging that would eventually flush the invalid addresses.
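A minimal sketch of this MAC-withdrawal optimization is shown below; the table contents and pseudowire names are illustrative:

```python
# Sketch: instead of waiting for entries to age out, a remote PE can ask
# that some or all MAC addresses learned over a pseudowire be flushed.

mac_table = {
    "aa:aa": "pw-PE2",
    "bb:bb": "pw-PE2",
    "cc:cc": "ce1",
}

def withdraw(mac_table: dict, pseudowire: str, macs=None) -> None:
    """Flush the given MACs (or all MACs if macs is None) learned on a pseudowire."""
    for mac, interface in list(mac_table.items()):
        if interface == pseudowire and (macs is None or mac in macs):
            del mac_table[mac]

withdraw(mac_table, "pw-PE2")   # withdraw everything learned from PE2
print(mac_table)                # {'cc:cc': 'ce1'}
```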

Q. Can VPLS be implemented over any packet network?

A. VPLS has been initially specified and implemented over an MPLS transport. From a purely technical point of view, the provider edge devices implementing VPLS could also transport the Ethernet frames over an IP backbone using different encapsulations, including generic routing encapsulation (GRE), Layer 2 Tunneling Protocol (L2TP), and IP Security (IPSec).

Q. Are there any differences in the encapsulation of Ethernet frames across the packet network between VPLS and Any Transport over MPLS (AToM)?

A. No. VPLS relies on the same encapsulation defined for point-to-point Ethernet over MPLS. The frame preamble and frame check sequence (FCS) are removed, and the remaining payload is encapsulated with a control word, a virtual circuit label, and an Interior Gateway Protocol (IGP) or transport label.
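The toy function below sketches this encapsulation at the field level (dropping the preamble and FCS and prepending a control word, VC label, and transport label); it is not a byte-accurate implementation, and all field names and values are made up:

```python
# Sketch of Ethernet-over-MPLS encapsulation as described above.

def encapsulate(frame: dict, vc_label: int, transport_label: int) -> dict:
    # Drop preamble and FCS; keep the rest of the frame as the payload.
    payload = {k: v for k, v in frame.items() if k not in ("preamble", "fcs")}
    return {
        "transport_label": transport_label,  # outer label, switched across the core
        "vc_label": vc_label,                # identifies the VPLS pseudowire
        "control_word": 0,
        "payload": payload,                  # original frame minus preamble/FCS
    }

frame = {"preamble": "...", "dst": "aa:aa", "src": "bb:bb",
         "type": 0x0800, "data": "...", "fcs": "..."}
print(encapsulate(frame, vc_label=16001, transport_label=24))
```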

Q. Is VPLS limited to Ethernet?

A. Even though most VPLS sites are expected to connect via Ethernet, they may connect using other Layer 2 technologies (ATM, Frame Relay, or Point-to-Point Protocol [PPP], for example). Sites connecting with non-Ethernet links exchange packets with the provider edge using a bridged encapsulation. The configuration requirements on the customer edge device are similar to the requirements for the Ethernet interworking in point-to-point Layer 2 services.

Q. Are there any scalability concerns with VPLS?

A. Packet replication and the amount of address information are the two main scaling concerns for the provider edge device. When packets need to be flooded (because of broadcast, multicast, or destination-unknown unicast addresses), the ingress provider edge needs to perform packet replication. As the number of provider edge devices in a VPLS increases, the number of packet copies that need to be generated increases. Depending on the hardware architecture, packet replication can have a significant impact on processing and memory resources. In addition, the number of MAC addresses that may be learned from the data plane may grow rapidly if many hosts connect to the VPLS. This situation can be alleviated by avoiding large, flat network domains in the VPLS.
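The quick calculation below illustrates both scaling factors with made-up numbers: the copies an ingress PE must generate per flooded frame, and the worst-case number of MAC entries a VSI might learn:

```python
# Sketch of the two scaling factors discussed above; figures are illustrative.

def flooded_copies(num_pes: int) -> int:
    """Ingress PE replicates a flooded frame to every other PE in the VPLS."""
    return num_pes - 1

def mac_entries(hosts_per_site: int, num_sites: int) -> int:
    """Worst case: a VSI may learn the MAC of every host in the VPLS."""
    return hosts_per_site * num_sites

for pes in (5, 20, 100):
    print(f"{pes:>3} PEs -> {flooded_copies(pes):>3} copies per flooded frame, "
          f"{mac_entries(50, pes):>5} potential MAC entries")
```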

Q. What is hierarchical VPLS?

A. A hierarchical model can be used to improve the scalability characteristics of VPLS. Hierarchical VPLS (H-VPLS) reduces signaling overhead and packet replication requirements for the provider edge. Two types of provider edge devices are defined in this model: user-facing provider edge (u-PE) and network provider edge (n-PE). Customer edge devices connect directly to u-PEs, which aggregate VPLS traffic before it reaches the n-PE, where the VPLS forwarding takes place based on the VSI. In this hierarchical model, u-PEs are expected to support Layer 2 switching and to perform normal bridging functions. Cisco VPLS uses 802.1Q tunneling, a double 802.1Q or Q-in-Q encapsulation, to aggregate traffic between the u-PE and n-PE. The Q-in-Q trunk becomes an access port to a VPLS instance on an n-PE (Figure 3).

Figure 3

Hierarchical VPLS
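As a rough sketch of the Q-in-Q aggregation between u-PE and n-PE, the snippet below pushes and pops a hypothetical outer service tag around the customer's inner VLAN tag; tag values and field names are illustrative:

```python
# Sketch of 802.1Q-in-802.1Q (Q-in-Q) aggregation on the u-PE/n-PE trunk.

def push_service_tag(frame: dict, service_vlan: int) -> dict:
    """u-PE: add the outer (service) tag while keeping the customer tag."""
    tagged = dict(frame)
    tagged["outer_vlan"] = service_vlan          # added by the u-PE
    return tagged

def pop_service_tag(frame: dict) -> dict:
    """n-PE: strip the outer tag before handing the frame to the VSI."""
    stripped = dict(frame)
    stripped.pop("outer_vlan", None)
    return stripped

customer_frame = {"dst": "aa:aa", "src": "bb:bb", "inner_vlan": 10, "data": "..."}
on_trunk = push_service_tag(customer_frame, service_vlan=2000)
print(on_trunk)
print(pop_service_tag(on_trunk) == customer_frame)   # True
```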



 Q. How does VPLS fit with metro Ethernet?

A. VPLS can play an important role in scaling metro Ethernet services by increasing geographical coverage and service capacity. The H-VPLS model allows service providers to interconnect dispersed metro Ethernet domains to extend the geographical coverage of the Ethernet service. H-VPLS helps scale metro Ethernet services beyond the 4000-subscriber limit imposed by the VLAN address space. Conversely, having an Ethernet access network contributes to the scalability of VPLS by distributing packet replication and reducing the signaling requirements. Metro Ethernet and VPLS are complementary technologies that enable more sophisticated Ethernet service offerings.

Q. Is Cisco VPLS standards-based?

A. Cisco VPLS is based on the IETF draft draft-ietf-pppvpn-vpls-ldp, which has wide industry support. VPLS specifications are still under development at the IETF. There are two proposed VPLS drafts (draft-ietf-pppvpn-vpls-ldp and draft-ietf-l2vpn-vpls-bgp). There are no current plans to support both drafts.

Q. How does VPLS compare with Cisco AToM?

A. Cisco AToM provides a standards-based implementation that enables point-to-point Layer 2 services. VPLS complements the portfolio of Layer 2 services with a multipoint offering based on Ethernet. These two kinds of services impose different requirements on the provider edge devices. A point-to-point service relies on a virtual circuit (or pseudowire) that the provider edges set up to transport Layer 2 frames between two attachment circuits. The mapping between attachment circuits and virtual circuits is static and one-to-one. A multipoint service requires the provider edge to perform a lookup on the frame contents (typically, MAC addresses) to determine the virtual circuit to be used to forward the frames to the destination. This lookup creates the multipoint nature of a VPLS. The virtual circuit signaling and encapsulation performed by the provider edge devices are the same in both cases, and the operation of the provider devices in the core is independent of the type of service implemented at the edge.

Q. How does VPLS compare with MPLS VPNs?

A. VPLS and MPLS (Layer 3) VPN enable two very different services. VPLS offers a multipoint Ethernet service that can support multiple higher-level protocols. MPLS VPN also offers a multipoint service, but it is limited to the transport of IP traffic and all traffic that can be carried over IP. Both VPLS and MPLS VPN support multiple link technologies for the customer edge to provider edge connection (Ethernet, Frame Relay, ATM, PPP, and so on). VPLS, however, imposes additional requirements (bridged encapsulation) on the customer edge devices in order to support non-Ethernet links. MPLS VPN reduces the amount of IP routing design and operation required from the VPN user, whereas VPLS leaves full control of IP routing to the VPN user. VPLS and MPLS VPN are two alternatives for implementing a VPN, and the selection of the appropriate technology requires analysis of the specific service requirements of the VPN customer.

Q. Does VPLS preclude the use of the same network infrastructure for services such as Layer 3 VPNs (L3VPNs), point-to-point Layer 2 VPNs (L2VPNs), and Internet services?

A. No. MPLS allows service providers to deploy a converged network infrastructure that supports multiple services. Provider edge devices are required to implement the signaling and encapsulation requirements for any specific service, but they do not have to be dedicated to a single service. Furthermore, the provider devices in the core of the network do not need to be aware of the service a packet is associated with. Provider devices are service- and customer-agnostic, giving the MPLS backbone unique scalability characteristics.

Juniper Debuts New Routers

Juniper tries a pay-as-you-grow approach with its new MX5, MX10 and MX40 routing platforms.

Juniper (NYSE:JNPR) is expanding its edge router lineup with its new Universal Edge WAN solution. The general idea behind the Universal Edge is to deliver to enterprises a scalable routing platform that can grow as demands require.

The Universal Edge includes the MX5, MX10 and MX40 platforms. The MX5 provides up to 20 Gbps of system capacity, the MX10 scales to 40 Gbps, while the MX40 delivers 60 Gbps of capacity.
"It's pay as you grow, so you can start with the MX5 and grow all the way to the MX80 and the whole way you have service flexibility and the ability to create high-performance service tunnels," Alan Sardella, product marketing director, Platform Systems Group at Juniper told InternetNews.com.

Juniper is taking a pay-as-you-grow approach with the capability for enterprises to scale an MX5 all the way up to an MX80 by way of interface and license upgrades.

The MX80 was announced back in 2009 and provides 80 Gbps of system capacity. As it turns out, the MX5, MX10 and MX40 share a lot in common with the MX80.

"You should think of the MX5, 10 and 40 as the same chassis and packet forwarding engine and everything except for the interface availability is the same," Sardella said. "There is a much lower base price to start out with on the MX5 and what happens is the interfaces that you buy are the ones that are enabled."

Sardella added that the MX5's approach to interfaces makes it more appropriate for WAN routing applications.

"These are all derivatives of the MX80 platform," Sardella said.

The MX80 and its derivatives also include the Juniper Networks J-Web solution, which enables the rapid deployment of a branch office.

"J-Web enables router setup via a GUI to create a branch office very rapidly," Sardella said. "You can configure the devices with all the interfaces, routing protocols and you're just forwarding packets immediately."

The new MX80 based platforms can also be leveraged as part of Juniper's architectural approach for creating flatter networks. In February, Juniper announced Qfabric as a new cloud computing fabric that aims to reduce network layers and complexity.
"The QFabric QFX is the single layer that collapses the access, aggregation and the core and that all happens within a single location or site," Sardella said. "But what you need is a high performance WAN in order to keep traffic flowing in a way that is not disruptive to end users, that where these routers comes in."

Force10 Networks launches new Z-Series ZettaScale core switches

Leverages the FTOS Force10 OS, a modular operating system with Ethernet, IPv4, IPv6, and MPLS protocols, to deliver resiliency and scalability

Force10 Networks, a provider of data centre networking services, has launched new Z-Series ZettaScale core switches that offer a choice between centralised and distributed core infrastructure.

The new Z9000 distributed core switch delivers scalability and performance in an ultra-small form factor while the Z9512 chassis-based switch delivers high density and switching capacity in a half-rack unit.

The Z9000 features 2.5 Tbps of switching capacity and is purpose-built for use in a leaf-and-spine architecture, where it can scale from 2 to 160 Terabits in a distributed core with latency as low as 3 microseconds.

Further, in a distributed architecture, the Z9000 enables a true 'plug-and-play' fabric: any of the switches can be taken out of service without bringing down the core network.

The Z9512 switch delivers 9.6 Terabits of switching capacity, ensuring ample capacity for large centralised data centre designs.

The Z9512 is a chassis-based switch that offers 480 line-rate, non-blocking 10GbE ports, 96 line-rate, non-blocking 40 GbE ports and 48 line-rate, non-blocking 100 GbE ports in a 19RU form factor.
In addition, its 9.6 Tbps switching capacity, an initial 400 Gbps of per-slot switching capacity (four times the slot capacity of other switches), sub-5-microsecond latency and an 8-gigabyte packet buffer on each of its 12 line cards deliver the performance expected of a centralised core switch.

Both the Z9000 and Z9512 leverage the FTOS Force10 OS, a mature and modular operating system with feature-rich Ethernet, IPv4, IPv6, and MPLS protocols, to deliver resiliency and scalability.

Both the products support Force10's Open Automation and Virtualisation Framework, which incorporates automated provisioning and scripting and enables the switches to communicate with virtual machines inside virtualised servers.

Force10 chief marketing officer Arpit Joshipura said data centre operators need new ways to accommodate changing traffic demands, that one type of product does not meet all needs, and that the Z-Series products allow customers to design architectures their own way and build faster facilities.

The company said that the Z9000 switch will be available for customer shipments in July 2011 while the Z9512 switch will be available in the second half of 2011.
