Network Enhancers - "Delivering Beyond Boundaries"

Monday, January 31, 2011

Why ITIL is still the heartbeat of IT services

Its perspective at the sharp end of ITIL projects means APM Group is well placed to see the benefits of implementing ITIL. Jessica Barry, APMG’s accreditor co-ordinator, explains why ITIL is still an extremely useful business tool and Version 3 is exactly what the industry needs to continue improving.

The issues surrounding ITIL and service management generate some heated debates.  On this website alone there are lots of examples of people who are pro-ITIL and evangelise on its benefits, but there are just as many who think it is removed from the working realities faced by service providers.
The APM Group has the role of accreditor for the ITIL scheme.  We accredit the examination institutes who work with training companies who in turn help end users adopt and embed ITIL.  We also manage the examination scheme, so we are well placed to understand the debate about ITIL, and answer some of the more burning questions that people are discussing.

Our view, naturally, is that ITIL is an extremely useful method, especially with the added dimensions offered by Version 3.  Our latest figures show that about 500,000 V3 certificates have been issued to candidates and every month there are more candidates taking the qualifications than the previous month.  The international appeal of ITIL V3 is also extending, and we have translated papers into 20 languages.

 Translations are triggered when there is demand – usually measured by a significant number of Foundation examinations in a particular country.  itSMF’s international chapters push for translations when they’ve got candidates asking for them – this in itself is a testament to ITIL’s value.

Demand for ITIL Masters

We are currently formulating the ITIL Master level of the certification scheme.  The ITIL Master is a high level qualification where candidates demonstrate practical application of their ITIL knowledge.  At Expert level you need to prove you know the guidance and the processes, but Masters goes further.  The IT service management industry told us they needed and wanted the Masters qualification because this level of maturity was missing in previous schemes.

We’ve got several ITIL experts devising the certification including: Sharon Taylor, Dave Cannon, Kevin Holland, Carol Hulm, Dave Wheeldon, and Vernon Lloyd.  They all have a wealth of experience in assessing individuals and defining what constitutes a robust and skilled ITIL knowledge and experience base.  We expect Masters to be widely available from mid 2011.

First hand ITIL experience

One of the candidates taking part in the Masters Pilots is Mathieu Notéris, senior ITSM consultant at Sogeti Belux, a provider of IT services in Belgium and Luxembourg.  Sogeti helps clients develop, implement and manage practical IT solutions and Mathieu believes ITIL certifications are the evidence that you are able ‘to do IT’.  He got involved in the pilots because he wanted ‘to stay on top of the wave’ and believes the qualifications give service managers credibility.  Mathieu particularly likes the fact that the ITIL Masters level is based on the candidate’s real-life experience of service management.

“If you are a manager or high-level senior consultant this is a way to establish your own value based on real-life, proven experience and not just on artificial business cases as was the case with the Service Manager certification from Version 2.  Results and evidence are there.  Lessons learned are real.  To argue, convince and therefore manage, you definitely need to be convinced yourself,” Mathieu says.

But he doesn’t think ITIL is just about the process and says we shouldn’t forget that behind the machines, the programmes, the business and the processes, there are human beings.  He says service management is as much about the discussions, factoring in different opinions, confronting issues, persuading and being persuaded, as it is about process.

He advises service managers to take all the opportunities open to them to enhance their learning and try to remember all lessons: both positive as well as negative.  “One remembered bad situation is a situation you’ll be able to detect, recognise and hopefully avoid in the future,” he says.

The Ten Golden Rules of Change Management

We all know that nothing stays the same – no matter how much we might wish it did. In business, there will be new services introduced and unpopular ones discontinued in response to business requirements.
Technology is no different: new applications will need to be incorporated and others dissolved. Equally, the workforce ebbs and flows, with new employees needing access and, internally, people moving in and out of project teams. Change happens, it’s a fact of life, but it needs to be handled competently if the desired benefits are to be realized.

When it comes to reflecting these changes in IT systems, complexity and a lack of process have trumped best practices. Organizations are ready to take action – but the enormity of the situation means many do not know where to start, or even how.

Where does it all go wrong?

We’ve all been there: we arrive bright and breezy on a Monday morning, only to find that a change implemented to the system over the weekend has caused things not to work as they should. The result is that IT staff, with their backs firmly up against the wall, scramble around the network, making random changes without planning or authorization, in a desperate effort to find and "fix" the fault. The reality is that all of these little tweaks can cause additional problems that may not come to light at first, instead rearing up at a later date when no connection to the original change is made, causing the system to fail.

A more worrying issue is when these changes aren’t identified. There have been numerous examples of how failing to control the process has led to breaches and compliance violations that can be traced back to misconfigured systems.

How can we make it right?

There are many steps that should be followed when defining a change management process. Our top tips for handling this are:

Step 1: Graphically build existing workflow processes within your organization. Ideally they should fit IT Infrastructure Library (ITIL) guidelines, or at the very least the organization’s pre-defined processes.

Step 2: Define how changes are requested and what supporting documentation is required. This could range from a simple e-mail request to a triplicate form that is completed and officially submitted to a pre-determined person or persons.

Step 3: Define the change management process so everyone knows what will happen and by when. This should include how changes are prioritized, the timeframes involved, how they can be tracked and how they’ll be implemented. As part of this step, you should also define the appeals process. It should cover how a person is informed that their request has been declined, the reason why it failed and what happens next.

Step 4: If you don’t already have one, establish a change advisory board (CAB). This should include a representative from each area of the business. This team will have responsibility for reviewing all requested changes: checking the change is complete, that it meets business needs, what the impact on other areas of the organization would be, whether adverse or positive, and finally whether making the change would introduce risks. If declined, this needs to be communicated back to the originator with the reasons why. If approved, the team would then be responsible for communicating the change to those affected by it.

Step 5: Design the change. During this stage it is important that any conflicts or insecurities are identified and rectified to avoid expensive repercussions at a later point. It may be prudent for this role to be split in two, with an administrator to verify that the change remains within corporate compliance. This is easier said than done for some changes, because in making a change to meet one standard you could quite easily be in breach of another, so this role can be key in determining what is and isn’t allowed. There is technology available that can help with this veritable minefield by automating, checking and flagging potential compliance conflicts.

Step 6: Implement and document the change. In an ideal world, the person who implements the change should be different from the person who designed it to avoid a conflict of interest.

Step 7: Verify that the change has been made, that it has been executed correctly and that only authorized changes have been implemented.

Step 8: Have a backup plan. If a change has been implemented that has had an adverse effect on the system, rather than blindly making changes, it should be reversed, reassessed and re-implemented once the point at which it failed has been identified and rectified.

Step 9: Audit the change process. It is important to check that approved improvements have been made – after all they’ve been identified as beneficial to the organization.

Step 10: Regularly reflect on the change management process to identify any sticking areas that can be ironed out.

Manual vs Automated

Many organizations still rely on manual documentation of their network configurations. This then means that access requests are also manually processed, which opens up a number of pitfalls:

Extended costs of manual changes: Manual requests can typically take up to 10 days, which is time that could be used elsewhere. A further issue is the time wasted checking, verifying and implementing unnecessary or duplicated changes that an automated process would have identified.

Risks of something breaking: As the workflow is all on "paper," it is virtually impossible to check all potential break points without automating the process. Sometimes the team will have spent a great deal of time engineering the change upfront, only for it to be denied by the risk team because approval is sought too late in the process.

Lack of audit and accountability: As everything is on paper, and often hurried, processes will go out the window with paperwork submitted after the change has been implemented, if at all. This can be as basic as having no idea who requested the change or why it was needed in the first place.

Impossible to define and enforce compliance: With just documentation to determine what the organization’s compliance requirements are, administrators are left to use their own personal judgment to determine whether a new rule introduces risks.

Quality of service: As it takes longer to submit and implement changes, the service is poor – both internally and often externally. This can lead to revenue loss – for example, if access to an online sales system or CRM with revenue potential cannot be provided, then for every week that implementation is delayed, more money is lost.

As this article demonstrates, change is happening frequently and organizations need to keep pace and manage the process completely if they’re to reap the rewards.

Whether you do so manually, or invest in technology to give you a winning edge, time and tide wait for no man. You need to be able to react to changes in your environment, competently, for your team to come out fighting.

Sunday, January 30, 2011

VLAN Configuration Comparison - Cisco & Juniper


If you are familiar with Cisco switches and can use them fairly well, this post will help boost your confidence as a Juniper network engineer.

If you can configure a Cisco switch, then you can also configure a JUNOS based switch :)  Here you go...

IOS
#vlan database
(vlan)#vlan 5 name Internet
(vlan)#vlan 6 name Intranet
(vlan)#apply
JUNOS
set vlans Internet vlan-id 5
set vlans Intranet vlan-id 6

Assign an IP address to a VLAN:
IOS
(config)#interface vlan 5
(config-if)#ip address 10.10.10.254 255.255.255.0

JUNOS
set interfaces vlan unit 5 family inet address 10.10.10.254/24
set vlans Internet l3-interface vlan.5

Assigning a port to a VLAN (Access):
IOS
(config)#interface fastEthernet 2/2
(config-if)#switchport mode access
(config-if)#switchport access vlan 5

JUNOS
set interfaces fe-2/0/2 unit 0 family ethernet-switching port-mode access
set interfaces fe-2/0/2 unit 0 family ethernet-switching vlan members Internet

Assigning a port to a VLAN (6 Trunked with 5 Native):
IOS
(config)#interface fastEthernet 2/2
(config-if)#switchport trunk encapsulation dot1q
(config-if)#switchport mode trunk
(config-if)#switchport trunk native vlan 5
(config-if)#switchport trunk allowed vlan 5,6

JUNOS
set interfaces fe-2/0/2 unit 0 family ethernet-switching port-mode trunk
set interfaces fe-2/0/2 unit 0 family ethernet-switching native-vlan-id 5
set interfaces fe-2/0/2 unit 0 family ethernet-switching vlan members 6
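
A quick way to verify the port and VLAN assignments on both platforms (a sketch only; exact command availability and output vary by platform and software version):

IOS
#show vlan brief
#show interfaces fastEthernet 2/2 switchport

JUNOS
show vlans
show ethernet-switching interfaces fe-2/0/2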
  

BGP Confederations – How, What and Why.

During your BGP studies, you’ll come across BGP confederations a couple of times. There are a few things that are easy to miss, and I’d like to clear them up here.

This will be both theory and practical, and I’ll be using this topology to explain things:



Our company, AS 65535 has a multitude of routers running BGP in our core.

N.B.: R2 and R4 will NOT be running BGP at all. We are connected to two ISPs – AS 100 and AS 200. We are also running OSPF internally inside the organisation.

BGP confederations allow your BGP deployment to scale quite nicely internally. Remember the rule of BGP split horizon – i.e. a BGP router learning a route from an iBGP peer will not advertise that to another iBGP peer. Confederations can help with this, as each intra-confederation connection is actually a special eBGP peering and not a regular iBGP peer.

BGP confederations can also help with splitting up your IGP domains. IGPs like EIGRP or OSPF cannot scale to gigantic routing table sizes. IGPs also put more emphasis on convergence speed as opposed to stability, unlike BGP. I know the topology I have is nowhere near big enough, but it does allow me to show you how it splits these IGP domains. I am going to run OSPF in both Sub-AS 10 and 30, as well as EIGRP in 10, 20 and 30, so I can separate the OSPF portion out completely. I am going to be running OSPF in area 0 in Sub-AS 10 as well as in 30, but these will be completely independent of each other.

Each router has a loopback which will be advertised. R1 is 1.1.1.1, R4 is 4.4.4.4 and so on. All iBGP and intra-confederation peers will be peered using the loopback IP addresses.

Configuring:

The ISP itself will have a normal BGP config, nothing special needs to be done. You do need to ensure you are configuring a peer with AS 65535. ISP1 and ISP2 do not know anything about the fact that we are running a confederation.

R1 config:

R1#
router bgp 100
no synchronization
bgp log-neighbor-changes
network 1.1.1.1 mask 255.255.255.255
neighbor 192.168.1.8 remote-as 65535
no auto-summary

R8’s config is like so. The BGP process must be configured under the Sub-AS number, in this case AS 10. The peer connection between ISP1 and our company will NOT come up until I tell R8 that it should identify itself to ISP1 as being in AS 65535. As soon as the confederation identifier is in place, the peer connection will come up. The bgp confederation peers statement just tells the router itself which AS’s are intra-confederation peers. If you do not add this, then the router will assume any AS different to the one it’s using itself is a full eBGP peer.

R8#
router bgp 10
 no synchronization
 bgp log-neighbor-changes
 bgp confederation identifier 65535
 bgp confederation peers 20 30
 network 8.8.8.8 mask 255.255.255.255
 neighbor 9.9.9.9 remote-as 10
 neighbor 9.9.9.9 update-source Loopback0
 neighbor 192.168.1.1 remote-as 100
 no auto-summary
!
router ospf 1
 log-adjacency-changes
 network 8.8.8.8 0.0.0.0 area 0
 network 10.1.1.16 0.0.0.3 area 0
 network 192.168.1.0 0.0.0.255 area 0

I’ve also added the next hop addresses into OSPF so I don’t need to use next-hop-self.
To do a quick check on the peer connection, have a look here:

R1#sh ip bgp sum

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
192.168.1.8     4 65535      17      17        4    0    0 00:10:06        1

The peer is up, and as far as R1 is concerned, R8 is in AS 65535.
R2 is simply running OSPF and nothing else:

R2#
router ospf 1
 log-adjacency-changes
 network 2.2.2.2 0.0.0.0 area 0
 network 10.1.1.16 0.0.0.3 area 0
 network 10.1.1.20 0.0.0.3 area 0

Router 9 is running BGP, OSPF and EIGRP – I wouldn’t do this in the real world; it’s simply to prove a point later. It’s also peered with AS 20, a Sub-AS. There is an important thing to note here. iBGP sessions do not need the ‘ebgp-multihop’ command – iBGP peers do NOT have to be directly connected. When Sub-AS’s peer with each other over loopbacks they DO need it though, otherwise the peering will simply not come up. You can see that the peer config to R8 does not have it while the peer config to R10 does. This is the config:

R9#
router bgp 10
 no synchronization
 bgp log-neighbor-changes
 bgp confederation identifier 65535
 bgp confederation peers 20 30
 network 9.9.9.9 mask 255.255.255.255
 neighbor 8.8.8.8 remote-as 10
 neighbor 8.8.8.8 update-source Loopback0
 neighbor 10.10.10.10 remote-as 20
 neighbor 10.10.10.10 ebgp-multihop 2
 neighbor 10.10.10.10 update-source Loopback0
 no auto-summary
!
router ospf 1
 log-adjacency-changes
 network 9.9.9.9 0.0.0.0 area 0
 network 10.1.1.20 0.0.0.3 area 0
 network 10.1.1.96 0.0.0.3 area 0
!
router eigrp 1
 network 9.9.9.9 0.0.0.0
 network 10.1.1.96 0.0.0.3
 no auto-summary

Router10 is peered with 2 other Sub-AS’s. It’s also running EIGRP:

#R10
router bgp 20
 no synchronization
 bgp log-neighbor-changes
 bgp confederation identifier 65535
 bgp confederation peers 10 30
 network 10.10.10.10 mask 255.255.255.255
 neighbor 3.3.3.3 remote-as 30
 neighbor 3.3.3.3 ebgp-multihop 2
 neighbor 3.3.3.3 update-source Loopback0
 neighbor 9.9.9.9 remote-as 10
 neighbor 9.9.9.9 ebgp-multihop 2
 neighbor 9.9.9.9 update-source Loopback0
 no auto-summary
!
router eigrp 1
 network 10.1.1.36 0.0.0.3
 network 10.1.1.96 0.0.0.3
 network 10.10.10.10 0.0.0.0
 no auto-summary

R3, R4, R11 and R12 are more of the same as what’s just been done. I’ll just post the configs here.

#R3
R3#sh run | begin eigrp
router eigrp 1
 network 3.3.3.3 0.0.0.0
 network 10.1.1.36 0.0.0.3
 auto-summary
!
router ospf 1
 log-adjacency-changes
 network 3.3.3.3 0.0.0.0 area 0
 network 10.1.1.36 0.0.0.3 area 0
 network 10.1.1.44 0.0.0.3 area 0
!
router bgp 30
 no synchronization
 bgp log-neighbor-changes
 bgp confederation identifier 65535
 bgp confederation peers 10 20
 neighbor 10.10.10.10 remote-as 20
 neighbor 10.10.10.10 ebgp-multihop 2
 neighbor 10.10.10.10 update-source Loopback0
 neighbor 11.11.11.11 remote-as 30
 neighbor 11.11.11.11 update-source Loopback0
 no auto-summary
R4#
router ospf 1
 log-adjacency-changes
 network 4.4.4.4 0.0.0.0 area 0
 network 10.1.1.44 0.0.0.3 area 0
 network 10.1.1.52 0.0.0.3 area 0
#R11
router ospf 1
 log-adjacency-changes
 network 10.1.1.52 0.0.0.3 area 0
 network 11.11.11.11 0.0.0.0 area 0
 network 172.20.1.0 0.0.0.255 area 0
!
router bgp 30
 no synchronization
 bgp log-neighbor-changes
 bgp confederation identifier 65535
 bgp confederation peers 10 20
 network 11.11.11.11 mask 255.255.255.255
 neighbor 3.3.3.3 remote-as 30
 neighbor 3.3.3.3 update-source Loopback0
 neighbor 172.20.1.12 remote-as 200
 no auto-summary
#R12
router bgp 200
 no synchronization
 bgp log-neighbor-changes
 network 12.12.12.12 mask 255.255.255.255
 neighbor 172.20.1.11 remote-as 65535
 no auto-summary

Now there are a couple of things we need to note about these special BGP peerings. Usually, the next-hop address will change when an update is given to an eBGP peer. If we check R10’s BGP table though, we can see that the next-hop addresses have NOT changed: (192.168.1.1 is R1’s IP address; 172.20.1.12 is R12’s)

R10#sh ip bgp
BGP table version is 8, local router ID is 10.10.10.10
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*  1.1.1.1/32       192.168.1.1              0    100      0 (10) 100 i
*  8.8.8.8/32       8.8.8.8                  0    100      0 (10) i
r> 9.9.9.9/32       9.9.9.9                  0    100      0 (10) i
*> 10.10.10.10/32   0.0.0.0                  0         32768 i
*  11.11.11.11/32   11.11.11.11              0    100      0 (30) i
*  12.12.12.12/32   172.20.1.12              0    100      0 (30) 200 i

That means updates to confederation peers will have the next hop stay the same. You need to ensure that those next-hop addresses are known by all confederation peers, otherwise you’ll end up with what I have above, where most entries have no valid route to the next hop.
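
As an alternative to advertising the external next-hop subnets into the IGP (as I did on R8 earlier), the confederation border router can rewrite the next hop towards its own intra-sub-AS iBGP peers for the routes it learned from the real eBGP peer. A minimal sketch on R8, assuming the peerings shown earlier:

R8(config)#router bgp 10
R8(config-router)#neighbor 9.9.9.9 next-hop-self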

If we check the BGP table on R3, we see the following:

R3#sh ip bgp
BGP table version is 20, local router ID is 3.3.3.3
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
r> 9.9.9.9/32       9.9.9.9                  0    100      0 (20 10) i
r> 10.10.10.10/32   10.10.10.10              0    100      0 (20) i
r>i11.11.11.11/32   11.11.11.11              0    100      0 i
*>i12.12.12.12/32   172.20.1.12              0    100      0 200 i

R3 can see that the IP 9.9.9.9 came through AS 20 and 10, even though all routers are in the same major AS.

The last thing I’d like to point out is the split of the IGP (OSPF in this case). Both Sub-AS 10 and 30 are running OSPF area 0. We can see how many times the SPF algorithm has run in each:

R9#sh ip ospf
 Routing Process "ospf 1" with ID 9.9.9.9
 
        SPF algorithm executed 3 times
R3#sh ip ospf
 Routing Process "ospf 1" with ID 3.3.3.3
 
        SPF algorithm executed 7 times

Let’s force the algorithm to run again by adding another loopback on Router9 and advertising it into OSPF:

R9#conf t
Enter configuration commands, one per line.  End with CNTL/Z.

R9(config)#int lo2
R9(config-if)#ip address 99.99.99.99 255.255.255.255

R9(config-if)#router ospf 1
R9(config-router)#network 99.99.99.99 0.0.0.0 area 0

If we now check the SPF algorithm again in Both Sub-AS’s:

R9#sh ip ospf
 Routing Process "ospf 1" with ID 9.9.9.9
 
        SPF algorithm last executed 00:00:56.144 ago
        SPF algorithm executed 4 times
R3#sh ip ospf
 Routing Process "ospf 1" with ID 3.3.3.3
 
        SPF algorithm last executed 00:27:18.572 ago
        SPF algorithm executed 7 times

We can see that in Sub-AS 10 the SPF algorithm ran 56 seconds ago. In Sub-AS 30 however, the change has not forced the algorithm to run again, proving that these IGP domains are completely separate from each other.
So that’s the basics of confederations. They can be very useful for a number of reasons. Just be sure to remember how exactly they operate. Any questions, feel free to ask.

Challenging Cisco’s Might in Network Switches Difficult for Juniper



Juniper (NYSE:JNPR) has a very small presence in the network switches market compared to market leader Cisco (NASDAQ:CSCO). Its current market share stands at around 4%, and we expect this to grow, although at a slower rate in the coming years, due to a maturing switches market and Cisco’s well entrenched position in it. There could however be more upside if Juniper is able to efficiently leverage its tie-up with IBM, selling its network switches under the IBM brand.

Juniper competes with second rung players like Alcatel-Lucent (NYSE:ALU), Huawei-3Com and HP (NYSE:HPQ), which are all competing fiercely for the pie left out by Cisco.
While we expect Juniper’s network switches share will rise to nearly 6% by the end of the Trefis forecast period, Trefis members expect a market share level of close to 8%, representing a potential upside of 8% to JNPR stock.

We currently have a Trefis price estimate of $27.76 for Juniper’s stock, about 25% below the current market price of $36.95.


Cisco’s Dominant Market Position Difficult to Challenge

Cisco has consistently maintained a market share of 55-60% in the bottom layer switching market. It has well developed and longstanding relationships with all major customers. Cisco is widely considered synonymous with switches in the industry, and hence it’s difficult for other players like Juniper to cut into its share. Even a player like HP, which is the industry leader in most of the other markets it operates in, could capture only 3-4% of market share and failed to dislodge Cisco.

IBM Tie-Up a Positive

To counter Cisco’s move into the server space, IBM has collaborated with Juniper whereby Juniper will be selling its network switches and routers under IBM’s brand name to complement IBM servers in data centers. Data centers are one of the fastest growing business segments for the networking business, and access to data centers is a major plus for Juniper. If Juniper is able to fructify its relationship with IBM, it can give a significant boost to its market share.

Saturday, January 29, 2011

IBM to Build Asia’s Largest Cloud Computing Centre in China

IBM, the world’s largest computer-services provider, said it won a contract to help China build Asia’s largest cloud-computing center by 2016.

The 620,000 square meter facility, which is to be owned by Range Technology, is expected to be completed in 2016, the companies announced on Tuesday. The data center aims to mainly serve government departments from China’s capital and across the country, but will also be open to banks and private enterprises.

The cloud computing center will be built in Langfang, a city between Beijing and Tianjin, in northern China. The data center is meant to support the development of a new information technology hub being built in the area, said IBM spokeswoman Harriet Ip. IBM and Range Technology signed an agreement on the data center last week in Chicago during Chinese President Hu Jintao’s state visit to the U.S.

The data centers are located along a river and will be built according to the latest green construction concepts, said IBM’s Steve Sams, VP of global site and facilities services, a unit of Global Technology Services. In some cases, water evaporation techniques have been used to cool a data center’s ambient air in Google and Microsoft cloud data centers, instead of electricity-powered air conditioning.

The data center will support independent software vendors and enterprises with software product development, as well as help Langfang city develop e-government services, administration systems and food-and-drug safety services.

IBM, the vendor for the project, did not disclose the cost of the data center. But the company said Range Technology is spending about US$1.49 billion on the building of the Langfang Range International Information Hub, of which the data center will be a part.

The cloud-computing center is expected to employ between 60,000 and 80,000 people, according to Ip’s e-mail. Cloud computing allows customers to save money by storing data on remote servers that can be accessed via the Internet.

Range Technology Development was established in 2009 as an Internet, data-center service and telecommunications-network- services provider, according to its website.

Hidden Treasures of BGP - Part 1


Types of BGP Tables

Until now we may all have believed that BGP has only a single routing table, where it stores the routes and runs the best path calculation. But we are mistaken: BGP actually maintains three tables – one for storing incoming routes from neighbours, one for the routes being sent to neighbours, and one for installing the routes, where you actually find the routes with their next-hop addresses. The tables are given below:

a) Adj-RIB-in
b) Adj-RIB-out
c) Loc-RIB

Adj-RIB-in stores the unprocessed information received from the peers. Best path selection occurs here as per the BGP attributes, and after confirmation the path is entered into the local BGP table, i.e. the Loc-RIB. From the Loc-RIB, BGP checks whether the next-hop address is reachable via the IGP; if it is, the route is entered into the main routing table.
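
On Cisco IOS, for example, you can look at each of these three tables per neighbour. The neighbour address 10.1.1.1 below is just an illustration, and viewing the unmodified Adj-RIB-in requires soft-reconfiguration inbound to be enabled for that neighbour:

Adj-RIB-in:   show ip bgp neighbors 10.1.1.1 received-routes
Loc-RIB:      show ip bgp
Adj-RIB-out:  show ip bgp neighbors 10.1.1.1 advertised-routes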

Friday, January 28, 2011

Verizon buying big into cloud with $1.4B Terremark bid

Verizon Communications plans to buy cloud service provider Terremark Worldwide for about $1.4 billion in a deal that could significantly expand its cloud offerings for enterprises.

Read more: Verizon buying big into cloud with $1.4B Terremark bid, from Network World NetFlash.

static route pointing to the interface VS next-hop IP address?

What is the difference between a static route pointing to an interface and one pointing to a next-hop IP address?

It has to do with how the layer 2 resolution works.  When the route points to the interface, layer 2 resolution is performed towards the final destination, not the intermediary next hop.  On a multi-access interface such as Ethernet, that means the router ARPs for every remote destination and relies on the downstream device answering via proxy ARP, which can bloat the ARP table and breaks if proxy ARP is disabled.  In certain multi-access designs this can be problematic.
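
A minimal illustration of the two forms in Cisco IOS syntax, using documentation prefixes and a hypothetical next hop of 10.1.1.2 reachable via GigabitEthernet0/1:

! Next-hop form: layer 2 resolution is done for 10.1.1.2 only
ip route 192.0.2.0 255.255.255.0 10.1.1.2
!
! Interface form: layer 2 resolution is attempted per destination in the prefix,
! relying on proxy ARP on multi-access segments (fine on point-to-point links)
ip route 198.51.100.0 255.255.255.0 GigabitEthernet0/1
!
! Common compromise on Ethernet: specify both the interface and the next hop
ip route 203.0.113.0 255.255.255.0 GigabitEthernet0/1 10.1.1.2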

Protocol fundamentals – ARP


What is ARP and how does it actually work? I’m surprised at the number of people who don’t know exactly what it does and how important it is.

To illustrate, I’m going to use this extremely simple network:


Both of these systems are really just connected to a home router. Remember that these ports are really just switched ports. The only time traffic traverses a layer 3 port is when it is sent outside the local LAN.

ARP is the Address Resolution Protocol. Essentially all it does is resolve a logical IP address to a physical Hardware (MAC) address.

In the above diagram, if 10.20.30.108 wants to send traffic to 10.20.30.4, the traffic will move down the OSI layers. It will eventually get down to layer 2. The layer 2 header needs to have both a source and a destination MAC address. 10.20.30.108 has the layer 3 address already, but not the layer 2 address. This is where ARP comes into the picture.

10.20.30.108 will send a broadcast out onto the LAN asking that whoever holds 10.20.30.4 respond with its MAC address (in that broadcast it’ll let everyone know what the MAC address of 10.20.30.108 is – so they can reply). When 10.20.30.4 gets that broadcast, it’ll respond with its OWN MAC address in a unicast.

Once 10.20.30.108 has received 10.20.30.4’s MAC address, it will add that mapping to its own local ARP cache. As long as that value is in the cache, it’ll know exactly how and where to send traffic bound for 10.20.30.4.

As an example, we can use Wireshark to see exactly what is happening, as below.


The first ARP packet was a broadcast to the local LAN asking for the owner of the 10.20.30.4 address. It also asks the owner to respond to 10.20.30.108 (this ARP request also contains 10.20.30.108’s own MAC address) – the second packet is a simple unicast back to 10.20.30.108 letting it know that 10.20.30.4’s MAC address is 00:11:32:06:0c:8a.

This can be verified as follows:

C:\Windows\system32>arp -a

Interface: 10.20.30.108 --- 0xb
  Internet Address      Physical Address      Type
  10.20.30.4            00-11-32-06-0c-8a     dynamic
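
The same mapping can be checked from a router as well. On a Cisco box, for example, these operational commands display and clear the ARP cache:

show ip arp
show ip arp 10.20.30.4
clear arp-cache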

ARP is one of the fundamental parts of TCP/IP – Make sure you know it :)

Deep Diving Router Architecture, Part III

Courtesy - Himawan Nugroho

In the previous two parts we discussed a lot about the hardware architecture. So where do we go from here? Let’s now discuss the features and the applications running on top of the hardware architecture that we have been discussing so far. I’m running out of pictures that are available and can be found on Google to explain this topic, and obviously I can’t use the pictures from my company’s internal documents. So let this part be the picture-less discussion.

The following are a few sample features and applications that are required from a modern, next generation router:

High Availability (HA) and Fast Convergence
Routers fail eventually. The failure may happen on the route processor module, the power supply, the switch fabric, the line card, or somehow the whole chassis. The key point here is not how to avoid the failure, but how to manage during the failure to minimize the time required to switch the traffic to a redundant path or module.

For most of us who like to see a network as a collection of nodes connected to each other, the failure might be either a link or a node failure. For these two cases, router vendors have been introducing Fast Convergence (FC) features in their products, such as IGP FC and MPLS TE Fast Re-Route (FRR), to reduce the network convergence time to a minimum. The key point for this type of failure is to detect the failure as soon as possible. If the nodes are connected with a direct link, the Loss of Signal (LoS) may be used to inform the upper layer protocol such as the IGP about the failure. If it is not a direct link, we may use a feature called Bidirectional Forwarding Detection (BFD), which basically sends hello packets from one end to the other.
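
To give an idea of how little configuration fast detection needs, here is a minimal BFD sketch in Cisco IOS syntax, assuming a hypothetical interface GigabitEthernet0/1 running OSPF process 1 (the timers are only examples):

interface GigabitEthernet0/1
 bfd interval 300 min_rx 300 multiplier 3
!
router ospf 1
 bfd all-interfaces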

When the hardware fails, we expect to see packet loss for a brief period of time. In most cases this is inevitable, and the only thing we can do is minimize the packet loss or reduce the convergence time. For a router with a redundant route processor, let’s say the primary route processor fails and it has to switch over to the secondary route processor: it can use a feature called Non-Stop Forwarding (NSF) during the switchover, until the secondary route processor is ready to take over completely, to avoid any packet loss. NSF offers some degree of transparency, since the failing node can inform its neighbors that it’s going down :) but promises it will come back online again, so please, neighbors, don’t flush its routes from the routing table for a certain period of time, and please keep forwarding traffic to the failing node.

The failing node itself must use the modular concept explained in the previous discussion. So the forwarding plane should be implemented somewhere other than the route processor, for example in the line cards. Before the failure, the router must run the Stateful Switchover (SSO) feature to ensure the redundant route processor is synchronized with the primary route processor, the fabric and the line cards. During the switchover, while waiting for the initialization process of the secondary route processor to take over completely, packet forwarding is still done in the line card using the last state of the local forwarding table before the failure. So if the failing node can still forward packets to the neighbors, even though it uses the last forwarding table state before the failure, and the neighbors are willing to continue forwarding packets to the failing node because they have been informed it will come back online again soon, then we should not have any packet loss at all. Later, the SSO/NSF feature should be able to bring the forwarding table back to the current state once the secondary route processor has taken over completely.

An HA feature that has been pushed recently is Non-Stop Routing (NSR). NSR is expected to offer full transparency to the neighbors. With NSF, during the failure the IGP relationship is torn down, even though the neighbors continue using the routes from the failing node during the agreed period of time. With NSR, the IGP relationship should remain up during the switchover.

If we go back to the hardware design and architecture, we can see now that the first requirement is for the secondary route processor to always be synchronized with the other route processor, the fabric and the line cards. If this is not achievable, then we should see packet loss during the switchover. Obviously we all understand that if the failure is in a line card or the fabric while there is traffic passing through it, we should expect to see packet loss regardless of any HA features we enable. And for a modular switch fabric architecture, we should have several different fabric modules, and the failure of one module should not affect the total packet forwarding capacity of the whole switch fabric.
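
As a rough sketch of what SSO with NSF-aware routing protocols looks like on a dual route processor Cisco IOS platform (availability and exact keywords vary by platform and software release):

redundancy
 mode sso
!
router ospf 1
 nsf
!
router bgp 65535
 bgp graceful-restart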

Quality of Service
Quality of Service (QoS) features, which give differentiated treatment to packets, are a must-have requirement, especially during network congestion. Where exactly may the congestion occur?

If we use the carrier class router architecture in Part II, we can see that the congestion may happen on the following:
- Egress queue, a queue in the egress line card before the physical interface: while waiting for the packet to be transmitted to the physical media
- Fabric queue, a queue to receive packets from the switch fabric in the egress line card: since it has to normalize the packets received from the fabric if they were converted to fixed-size cells, for example. Or because the egress queue is congested, so this queue becomes congested too
- Ingress queue, a queue before sending packets to the switch fabric in the ingress line card: as a consequence of congestion in the fabric queue or in the fabric, this queue can become congested as well

Congestion may happen in the switch fabric itself. But normally a carrier-class router has a huge forwarding capacity inside the switch fabric to accommodate a fully loaded chassis with all line cards, unless the switch fabric is modular and a failure in some of the fabric modules reduces the capacity.

So the key here is that we should be able to differentiate services at many points inside the router. For example, if the egress physical ports are congested, we should be able to ensure the high priority packets in the egress queue are transmitted first. The same goes for the fabric queue. And even inside the fabric we should be able to prioritize some packets in case the fabric queue or the fabric itself is congested. And when there is congestion in the egress queue, it should inform the fabric queue, which will inform the ingress queue to slow down sending packets to the fabric. This mechanism is known as back pressure, and the communication from the fabric queue to the ingress queue is normally through a bypass link, not through the fabric, since the intelligent fabric described in Part II passes traffic in one direction only, from ingress to egress, not the other way around. Slowing down the packets sent to the fabric actually means the ingress packet engine should start dropping low priority packets, so it can send a lower rate of traffic to the ingress queue.

It is clear now where we can deploy QoS tools in different points inside the router. Policing, for example, should be done in ingress packet engine. Egress queue can use shaping or queuing mechanism and congestion avoidance tools. Fabric queue may need only to be able to inform the ingress queue in case there is congestion.

By the way, the QoS marking that is used inside the router is normally derived from the marking set on the packet, such as CoS, DSCP or EXP. When the packet travels within the router, the external marking is used to create an internal marking that will be used along the forwarding path until the packet leaves the router. It should be the task of the ingress packet engine to do the conversion.

One other important point of the QoS feature set is support for the recent hierarchical QoS model. In a normal network, a packet that comes to the router has only one tag or identifier to distinguish the priority of the packet for a given source or flow. In an MPLS network, the tag will be the EXP bits. In a normal IP network, the identifier can be CoS or DSCP. They are all associated with only one type of source or flow, so there is only one QoS action to be done. But what if there are multiple tags, and we are required to apply different QoS tools to different tags? Let’s say in a Carrier Ethernet environment the packet that reaches the router comes with two 802.1q tags: the S-tag to identify the provider’s aggregation point, for example, and the C-tag to identify different customer VLANs (this is known as Q-in-Q). We may want to apply a QoS action to the packet as a unit, meaning we just need to apply QoS to the S-tag, but we may also want to apply QoS based on different C-tags. This means the router must support a hierarchical QoS model where the main QoS class impacts the whole packet, while the child classes can be specific to the customer tag.
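
A rough idea of what such a parent/child hierarchy looks like in Cisco IOS MQC syntax, assuming a hypothetical Q-in-Q subinterface carrying S-tag 100 and prioritizing EF traffic within it (platform support for these keywords varies):

class-map match-any VOICE
 match dscp ef
!
policy-map CHILD-PER-CUSTOMER
 class VOICE
  priority percent 20
 class class-default
  fair-queue
!
policy-map PARENT-S-TAG
 class class-default
  shape average 50000000
  service-policy CHILD-PER-CUSTOMER
!
interface GigabitEthernet0/1.100
 encapsulation dot1Q 100 second-dot1q any
 service-policy output PARENT-S-TAG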

Multicast
In a network of multiple nodes, multicast traffic means a single packet coming from one source gets replicated to multiple nodes depending on the requests to join the multicast group. Now it’s time for us to look in more detail and ask the question: who does the replication inside the router?

A multicast packet can be distinguished easily by the destination multicast group address. Inside the router the replication can be done in the ingress line card, called ingress replication, or in the egress line card, called egress replication. Using a multicast control protocol such as PIM, the ingress line card should be able to know the destination line cards for any multicast group address. Let’s say we have two ports in the ingress line card, and a multicast packet (S,G) is received on one port. From the lookup, the ingress packet engine or network processor finds out that the other port in the same line card is interested in the multicast group, as well as some other line cards. The ingress line card may do ingress replication, replicating the packet into multiple copies and sending it to the other port in the same line card as well as to the other line cards.

Now, if we always do ingress replication there is a huge drawback in terms of performance. Let’s say the rate of multicast traffic received by the ingress line card is X Gbps, and there are 10 egress ports, in different line cards, that are interested in the multicast group. If ingress replication is done, then the ingress card must replicate the packets 10 times, meaning the total rate is now 10X Gbps, and this is the rate that is sent from the ingress line card to the switch fabric. In this scenario it’s better to use egress replication, since the ingress line card just needs to send a single packet to each egress line card that is interested. And if there are multiple ports on the egress card that are interested in the same multicast group, the replication of the packets can be done by the egress line card in order to send the same packet to all those ports. Egress replication avoids the unnecessarily huge amount of traffic inside the ingress queue and the fabric that ingress replication would have caused.

In a carrier-class router, the switch fabric is more intelligent: it can replicate multicast packets inside the fabric. So again, the ingress line card just needs to send a single packet to the fabric; then, based on the interested egress line cards, the fabric replicates this packet and sends it to those egress cards, and the egress line card can do another replication in case there is more than one port interested in the multicast group.
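
From the operator’s side, the replication state that the line cards and fabric work from is simply the multicast routing table. A minimal Cisco IOS sketch to build and inspect it (the interface name is hypothetical):

ip multicast-routing
!
interface GigabitEthernet0/1
 ip pim sparse-mode
!
! The incoming interface and outgoing interface list of each (S,G) entry
! indicate where replication ultimately has to happen
show ip mroute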

Performance and Scalability
Once you have reached this point, I guess you have started asking questions in your head for any feature or protocol: is it done in hardware or software? Is it done by the central CPU or distributed in the line card? Is it done in the ingress line card or the egress? If yes, then good – finally we are making progress here.

Before I continue I would like to mention one critical hardware component in the forwarding plane, which is Ternary Content Addressable Memory (TCAM). In simple words, TCAM is a high speed memory that is used to store the entries of the forwarding table, or of other features such as access control lists, in order to do high performance hardware switching. Remember the concept of pushing the forwarding table to the line card processor, and then from the line card processor to the hardware? TCAM is used to store that information. So now you know we should ensure there is enough space there to keep the information – in other words, the TCAM is one limit point in the forwarding path. If the route processor pushes more forwarding entries than the TCAM can handle, we may end up with an inconsistent forwarding table between the route processor and the line card. This means that even though the route processor knows what to do with the packet, the hardware may not have the entry and will just drop it.

Looking at the modular architecture of a next generation router, it is clear that in order to achieve non-blocking or line rate packet switching performance, we should ensure that every component in the forwarding path supports the line rate. It means that if we want to forward X Gbps of traffic without any congestion, then the components from the ingress processor and queue in the ingress line card, the capacity of the fabric, the fabric queue, and the egress processor and egress queue in the egress line card should be able to process X Gbps or even more. So if you want to know where the bottleneck inside the router is, check the processing capacity of each component. If you know the capacity from the ingress line card to the fabric is only X Gbps, but you put more ports in the ingress line card with a total capacity of more than X, it means you are oversubscribing. And by knowing the congested point you can figure out which QoS tools to apply and where exactly to apply them. In this example, using egress QoS won’t help, as that is not the congestion point; the congestion is in the queue towards the fabric.
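
As a quick worked example with made-up numbers: a line card with eight 10 Gbps ports offers 80 Gbps of ingress capacity, so if its connection to the fabric is only 40 Gbps the card is oversubscribed 2:1, and under full load it is the ingress queue towards the fabric, not the egress queue, where packets back up and where the QoS policy has to act.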

Now, why bother to keep increasing route processor performance, if we know the actual forwarding performance comes from the forwarding plane in the line cards? Well, because we still need the route processor to do the control plane function. You need a good CPU in order to process a big number of IGP or BGP control packets. You still need a big memory to store the routes received from the neighbors before they can be pushed down to the hardware. You also need good storage capacity to keep the router software image as well as any system logging and crash dump information.

NGN Multi-Service Features and Application

It is common for a next generation network to carry multiple different services. The common applications, other than multicast for IPTV, are MPLS L3VPN for business customers, Internet, and L2VPN point-to-point and multipoint with VPLS, and so on. The complexity comes when we have to combine and run these features at the same time.

For example, when we have an MPLS-based network, the label imposition for the next hop is done in the ingress line card. But what if we run another feature, such as a type of L2VPN that is software based or performed in the route processor? We may need to do the label imposition in the egress line card for this reason.

And what about when we have to do multiple lookups? For example, we may have to remove two MPLS tags on the last label switch router when Penultimate Hop Popping (PHP) is not being used in an MPLS L3VPN network. First of all we need to do a lookup to know what to do with the first, topmost MPLS tag. Most probably we want to keep the topmost tag to get the EXP bits for QoS. Then we have to do another lookup on the VPN label in the second tag to associate it with the VRF. Last, after all the MPLS labels have been stripped off, we still need to do another lookup in the IP forwarding table to know which egress interface we should send the packet to. Doing several lookups in the same location, such as ingress, may introduce the concept of recirculation, where the packet is looped inside the ingress line card. So after the first lookup the packet is not sent to the fabric; instead it gets its layer 2 information re-written with the destination of the ingress line card itself, and the packet is sent back to the first piece of hardware that processes incoming packets. So it looks as if it is just the next packet that needs to be processed by the line card.

Multicast VPN can give us a different challenge. But just to summarize: by knowing how a protocol or feature works, and which component inside the router does the specific task related to that feature, we can foresee whether any issues may occur during the implementation of the design. And we may be able to find a workaround to overcome those issues.

Thursday, January 27, 2011

Juniper's Stratus, Falcon Go Beta


Juniper Networks Inc. (NYSE: JNPR)'s high-profile Stratus and Falcon projects are in the hands of customers, CEO Kevin Johnson said during the company's fourth-quarter earnings call Tuesday. (See Juniper Reports Q4.)

Stratus, the company's ambitious data-center fabric, shipped in beta form to a "major customer" during the final three months of 2010, Johnson said.

And the first beta release of Falcon -- software that will let Juniper's MX 3D routers become an Evolved Packet Core (EPC) for Long Term Evolution (LTE) networks -- is complete and likewise shipped to a "major service provider" in the quarter.

Johnson said both products should begin generating revenues in the second half of 2011.

Why this matters
Juniper has been a Wall Street darling for most of the past six months, but the euphoria won't last if these two projects don't make the big splash that Juniper keeps promising.

Stratus, the more ambitious of the two, was first announced two years ago. Juniper keeps dropping hints about it, but the product launch was never going to be earlier than 2011. Stratus would compete against the Unified Computing System from Cisco Systems Inc. (Nasdaq: CSCO) and the modestly named Brocade One from Brocade Communications Systems Inc. (Nasdaq: BRCD).

Juniper showed off Falcon at CTIA nearly a year ago. Plenty of rivals have EPCs, but Juniper particularly needs to have an alternative to router rivals Alcatel-Lucent (NYSE: ALU) and of course Cisco, which is perceived to have snatched Starent Networks away from Juniper.

Wednesday, January 26, 2011

Junos - Class-of-Service Operational Mode Quick Commands

The following are a few commands related to Junos CoS; they are quite useful when preparing for the JNCIE exam or when troubleshooting CoS issues in a production network.
The list below summarizes the command-line interface (CLI) commands you can use to monitor and troubleshoot class of service (CoS).

- show class-of-service: Display the entire CoS configuration, including system-chosen defaults.
- show class-of-service adaptive-shaper: (J-series routing platform only) Display trigger points and associated rates for CoS adaptive shapers.
- show class-of-service classifier: For each CoS classifier, display the mapping of code point value to forwarding class and loss priority.
- show class-of-service code-point-aliases: Display the mapping of CoS code point aliases to corresponding bit patterns.
- show class-of-service drop-profile: Display data points for each CoS random early detection (RED) drop profile.
- show class-of-service fabric scheduler-map: (M320 routers and T-series routing platforms only) Display the mapping of CoS schedulers to switch fabric traffic priorities and a summary of scheduler parameters for each priority.
- show class-of-service fabric statistics: (M320 routers and T-series routing platforms only) Display CoS switch fabric queue statistics.
- show class-of-service forwarding-class: Display the mapping of forwarding class names to queue numbers.
- show class-of-service forwarding-table: Display the entire CoS configuration as it exists in the forwarding table.
- show class-of-service forwarding-table classifier: Display the mapping of code point value to queue number and loss priority for each classifier as it exists in the forwarding table.
- show class-of-service forwarding-table classifier mapping: For each logical interface, display either the table index of the classifier for a given code point type or the queue number (if it is a fixed classification) in the forwarding table.
- show class-of-service forwarding-table drop-profile: Display the data points of all random early detection (RED) drop profiles as they exist in the forwarding table.
- show class-of-service forwarding-table fabric scheduler-map: (M320 routers and T-series routing platforms only) Display the scheduler map information as it exists in the forwarding table for the switch fabric.
- show class-of-service forwarding-table loss-priority-map: (J-series routing platform only) Display the mapping of code point value to loss priority as it exists in the forwarding table.
- show class-of-service forwarding-table loss-priority-map mapping: (J-series routing platform only) For each logical interface, display the loss priority table index.
- show class-of-service forwarding-table rewrite-rule: Display the mapping of queue number and loss priority to code point value for each rewrite rule as it exists in the forwarding table.
- show class-of-service forwarding-table rewrite-rule mapping: For each logical interface, display the table identifier of the rewrite rule map for each code point type.
- show class-of-service forwarding-table scheduler-map: For each physical interface, display the scheduler map information as it exists in the forwarding table.
- show class-of-service fragmentation-map: For Adaptive Services (AS) PIC link services IQ interfaces (lsq) only, display fragmentation properties for specific forwarding classes.
- show class-of-service interface: Display the logical and physical interface associations for the classifier, rewrite rules, and scheduler map objects.
- show class-of-service interface-set: Display the configured shaping rate and the quality of service (QoS) adjusted shaping rate for each logical interface set configured for hierarchical class of service (CoS).
- show class-of-service loss-priority-map: (J-series routing platform only) Display the mapping of code point value to loss priority.
- show class-of-service rewrite-rule: Display the mapping of forwarding classes and loss priority to code point values.
- show class-of-service routing-instance: (M-series and T-series routing platforms only) Display the mapping of CoS objects to routing instances.
- show class-of-service scheduler-map: Display the mapping of schedulers to forwarding classes and a summary of scheduler parameters for each entry.
- show class-of-service traffic-control-profile: For Gigabit Ethernet IQ and Channelized IQ PICs only, display traffic shaping and scheduling profiles.
- show class-of-service virtual-channel: (J-series routing platform only) Display virtual channel information.
- show class-of-service virtual-channel-group: (J-series routing platform only) Display virtual channel group information.

